2026-03-10 00:00:09.564729 | Job console starting
2026-03-10 00:00:09.575897 | Updating git repos
2026-03-10 00:00:09.673829 | Cloning repos into workspace
2026-03-10 00:00:09.987189 | Restoring repo states
2026-03-10 00:00:10.014964 | Merging changes
2026-03-10 00:00:10.014980 | Checking out repos
2026-03-10 00:00:10.478996 | Preparing playbooks
2026-03-10 00:00:11.502354 | Running Ansible setup
2026-03-10 00:00:20.771167 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-10 00:00:22.680965 |
2026-03-10 00:00:22.681098 | PLAY [Base pre]
2026-03-10 00:00:22.731132 |
2026-03-10 00:00:22.731267 | TASK [Setup log path fact]
2026-03-10 00:00:22.783306 | orchestrator | ok
2026-03-10 00:00:22.847123 |
2026-03-10 00:00:22.847283 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-10 00:00:22.887684 | orchestrator | ok
2026-03-10 00:00:22.906944 |
2026-03-10 00:00:22.907052 | TASK [emit-job-header : Print job information]
2026-03-10 00:00:22.990641 | # Job Information
2026-03-10 00:00:22.990807 | Ansible Version: 2.16.14
2026-03-10 00:00:22.990893 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-10 00:00:22.990930 | Pipeline: periodic-midnight
2026-03-10 00:00:22.990953 | Executor: 521e9411259a
2026-03-10 00:00:22.990973 | Triggered by: https://github.com/osism/testbed
2026-03-10 00:00:22.990995 | Event ID: 7ceccf5167b1458eb5435531ac7a90c1
2026-03-10 00:00:22.998752 |
2026-03-10 00:00:23.003760 | LOOP [emit-job-header : Print node information]
2026-03-10 00:00:23.212744 | orchestrator | ok:
2026-03-10 00:00:23.212888 | orchestrator | # Node Information
2026-03-10 00:00:23.212917 | orchestrator | Inventory Hostname: orchestrator
2026-03-10 00:00:23.212938 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-10 00:00:23.212957 | orchestrator | Username: zuul-testbed06
2026-03-10 00:00:23.212974 | orchestrator | Distro: Debian 12.13
2026-03-10 00:00:23.212994 | orchestrator | Provider: static-testbed
2026-03-10 00:00:23.213012 | orchestrator | Region:
2026-03-10 00:00:23.213030 | orchestrator | Label: testbed-orchestrator
2026-03-10 00:00:23.213046 | orchestrator | Product Name: OpenStack Nova
2026-03-10 00:00:23.213062 | orchestrator | Interface IP: 81.163.193.140
2026-03-10 00:00:23.266965 |
2026-03-10 00:00:23.267112 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-10 00:00:24.370215 | orchestrator -> localhost | changed
2026-03-10 00:00:24.376581 |
2026-03-10 00:00:24.376675 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-10 00:00:26.512914 | orchestrator -> localhost | changed
2026-03-10 00:00:26.524067 |
2026-03-10 00:00:26.524164 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-10 00:00:27.826755 | orchestrator -> localhost | ok
2026-03-10 00:00:27.833560 |
2026-03-10 00:00:27.833658 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-10 00:00:27.871422 | orchestrator | ok
2026-03-10 00:00:27.920598 | orchestrator | included: /var/lib/zuul/builds/4a5ad4ba3fd64d48834bebaf1663dbc6/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-10 00:00:27.968938 |
2026-03-10 00:00:27.969035 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-10 00:00:34.220686 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-10 00:00:34.220856 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/4a5ad4ba3fd64d48834bebaf1663dbc6/work/4a5ad4ba3fd64d48834bebaf1663dbc6_id_rsa
2026-03-10 00:00:34.220910 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/4a5ad4ba3fd64d48834bebaf1663dbc6/work/4a5ad4ba3fd64d48834bebaf1663dbc6_id_rsa.pub
2026-03-10 00:00:34.220933 | orchestrator -> localhost | The key fingerprint is:
2026-03-10 00:00:34.220953 | orchestrator -> localhost | SHA256:eSkwk2zYqKCcuZDuohf2t3WIS8ehPBlSfwHY2lBtCOg zuul-build-sshkey
2026-03-10 00:00:34.220972 | orchestrator -> localhost | The key's randomart image is:
2026-03-10 00:00:34.220999 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-10 00:00:34.221017 | orchestrator -> localhost | | ..=oo |
2026-03-10 00:00:34.221035 | orchestrator -> localhost | | .=o.o.o |
2026-03-10 00:00:34.221052 | orchestrator -> localhost | |. .o X+ .. |
2026-03-10 00:00:34.221068 | orchestrator -> localhost | |ooo.Eo.=.. o |
2026-03-10 00:00:34.221085 | orchestrator -> localhost | |++. . . S + |
2026-03-10 00:00:34.221107 | orchestrator -> localhost | |o + o * * |
2026-03-10 00:00:34.221123 | orchestrator -> localhost | | + o B = . |
2026-03-10 00:00:34.221139 | orchestrator -> localhost | |o . ...= . |
2026-03-10 00:00:34.221156 | orchestrator -> localhost | |+o .o. |
2026-03-10 00:00:34.221173 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-10 00:00:34.221217 | orchestrator -> localhost | ok: Runtime: 0:00:04.198053
2026-03-10 00:00:34.227216 |
2026-03-10 00:00:34.227306 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-10 00:00:34.276532 | orchestrator | ok
2026-03-10 00:00:34.300760 | orchestrator | included: /var/lib/zuul/builds/4a5ad4ba3fd64d48834bebaf1663dbc6/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-10 00:00:34.329109 |
2026-03-10 00:00:34.329206 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-10 00:00:34.367023 | orchestrator | skipping: Conditional result was False
2026-03-10 00:00:34.373351 |
2026-03-10 00:00:34.373429 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-10 00:00:35.328495 | orchestrator | changed
2026-03-10 00:00:35.333511 |
2026-03-10 00:00:35.343769 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-10 00:00:35.619819 | orchestrator | ok
2026-03-10 00:00:35.625052 |
2026-03-10 00:00:35.625138 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-10 00:00:36.095812 | orchestrator | ok
2026-03-10 00:00:36.101030 |
2026-03-10 00:00:36.101138 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-10 00:00:36.562959 | orchestrator | ok
2026-03-10 00:00:36.572085 |
2026-03-10 00:00:36.572173 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-10 00:00:36.608702 | orchestrator | skipping: Conditional result was False
2026-03-10 00:00:36.614337 |
2026-03-10 00:00:36.614430 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-10 00:00:37.629280 | orchestrator -> localhost | changed
2026-03-10 00:00:37.649347 |
2026-03-10 00:00:37.649447 | TASK [add-build-sshkey : Add back temp key]
2026-03-10 00:00:38.483008 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/4a5ad4ba3fd64d48834bebaf1663dbc6/work/4a5ad4ba3fd64d48834bebaf1663dbc6_id_rsa (zuul-build-sshkey)
2026-03-10 00:00:38.483188 | orchestrator -> localhost | ok: Runtime: 0:00:00.010101
2026-03-10 00:00:38.489021 |
2026-03-10 00:00:38.489116 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-10 00:00:39.061844 | orchestrator | ok
2026-03-10 00:00:39.079980 |
2026-03-10 00:00:39.080080 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-10 00:00:39.161440 | orchestrator | skipping: Conditional result was False
2026-03-10 00:00:39.294252 |
2026-03-10 00:00:39.294346 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-10 00:00:39.896195 | orchestrator | ok
2026-03-10 00:00:39.920506 |
2026-03-10 00:00:39.920609 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-10 00:00:40.000132 | orchestrator | ok
2026-03-10 00:00:40.020649 |
2026-03-10 00:00:40.020752 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-10 00:00:41.092564 | orchestrator -> localhost | ok
2026-03-10 00:00:41.099779 |
2026-03-10 00:00:41.099879 | TASK [validate-host : Collect information about the host]
2026-03-10 00:00:42.986941 | orchestrator | ok
2026-03-10 00:00:43.034795 |
2026-03-10 00:00:43.034976 | TASK [validate-host : Sanitize hostname]
2026-03-10 00:00:43.112309 | orchestrator | ok
2026-03-10 00:00:43.116658 |
2026-03-10 00:00:43.116742 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-10 00:00:44.563870 | orchestrator -> localhost | changed
2026-03-10 00:00:44.569267 |
2026-03-10 00:00:44.569357 | TASK [validate-host : Collect information about zuul worker]
2026-03-10 00:00:45.237840 | orchestrator | ok
2026-03-10 00:00:45.242118 |
2026-03-10 00:00:45.242198 | TASK [validate-host : Write out all zuul information for each host]
2026-03-10 00:00:46.176022 | orchestrator -> localhost | changed
2026-03-10 00:00:46.193526 |
2026-03-10 00:00:46.193623 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-10 00:00:46.537008 | orchestrator | ok
2026-03-10 00:00:46.556488 |
2026-03-10 00:00:46.556585 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-10 00:02:11.035812 | orchestrator | changed:
2026-03-10 00:02:11.036116 | orchestrator | .d..t...... src/
2026-03-10 00:02:11.036154 | orchestrator | .d..t...... src/github.com/
2026-03-10 00:02:11.036180 | orchestrator | .d..t...... src/github.com/osism/
2026-03-10 00:02:11.036203 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-10 00:02:11.036225 | orchestrator | RedHat.yml
2026-03-10 00:02:11.058999 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-10 00:02:11.059017 | orchestrator | RedHat.yml
2026-03-10 00:02:11.059068 | orchestrator | = 2.2.0"...
2026-03-10 00:02:25.819676 | orchestrator | - Finding latest version of hashicorp/null...
2026-03-10 00:02:25.844313 | orchestrator | - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2026-03-10 00:02:25.998125 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-10 00:02:26.488118 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-10 00:02:26.554066 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-10 00:02:27.127184 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-10 00:02:27.192766 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-10 00:02:27.880056 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-10 00:02:27.880105 | orchestrator |
2026-03-10 00:02:27.880112 | orchestrator | Providers are signed by their developers.
2026-03-10 00:02:27.880118 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-10 00:02:27.880124 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-10 00:02:27.880137 | orchestrator |
2026-03-10 00:02:27.880141 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-10 00:02:27.880150 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-10 00:02:27.880155 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-10 00:02:27.880159 | orchestrator | you run "tofu init" in the future.
2026-03-10 00:02:27.880398 | orchestrator |
2026-03-10 00:02:27.880409 | orchestrator | OpenTofu has been successfully initialized!
2026-03-10 00:02:27.880428 | orchestrator |
2026-03-10 00:02:27.880433 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-10 00:02:27.880437 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-10 00:02:27.880441 | orchestrator | should now work.
2026-03-10 00:02:27.880445 | orchestrator |
2026-03-10 00:02:27.880449 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-10 00:02:27.880457 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-10 00:02:27.880461 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-10 00:02:28.057787 | orchestrator | Created and switched to workspace "ci"!
2026-03-10 00:02:28.057838 | orchestrator |
2026-03-10 00:02:28.057844 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-10 00:02:28.057849 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-10 00:02:28.057870 | orchestrator | for this configuration.
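[Editor's note: the provider versions resolved by "tofu init" above imply a required_providers block roughly like the following. This is a hedged sketch inferred from the log's version constraints (">= 2.2.0", ">= 1.53.0") and resolved versions, not the actual testbed source, which is not part of this log.]

```hcl
terraform {
  required_providers {
    # Resolved to v2.7.0 in this run; constraint inferred from the truncated
    # '= 2.2.0"...' line in the init output.
    local = {
      source  = "hashicorp/local"
      version = ">= 2.2.0"
    }
    # "Finding latest version" in the log suggests no explicit constraint;
    # resolved to v3.2.4 in this run.
    null = {
      source = "hashicorp/null"
    }
    # Constraint shown verbatim in the init output; resolved to v3.4.0.
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
  }
}
```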
2026-03-10 00:02:28.163606 | orchestrator | ci.auto.tfvars 2026-03-10 00:02:28.165719 | orchestrator | default_custom.tf 2026-03-10 00:02:29.154654 | orchestrator | data.openstack_networking_network_v2.public: Reading... 2026-03-10 00:02:29.727305 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 1s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a] 2026-03-10 00:02:30.013326 | orchestrator | 2026-03-10 00:02:30.013413 | orchestrator | OpenTofu used the selected providers to generate the following execution 2026-03-10 00:02:30.013425 | orchestrator | plan. Resource actions are indicated with the following symbols: 2026-03-10 00:02:30.013434 | orchestrator | + create 2026-03-10 00:02:30.013443 | orchestrator | <= read (data resources) 2026-03-10 00:02:30.013453 | orchestrator | 2026-03-10 00:02:30.013461 | orchestrator | OpenTofu will perform the following actions: 2026-03-10 00:02:30.014506 | orchestrator | 2026-03-10 00:02:30.014523 | orchestrator | # data.openstack_images_image_v2.image will be read during apply 2026-03-10 00:02:30.014532 | orchestrator | # (config refers to values not yet known) 2026-03-10 00:02:30.014541 | orchestrator | <= data "openstack_images_image_v2" "image" { 2026-03-10 00:02:30.014549 | orchestrator | + checksum = (known after apply) 2026-03-10 00:02:30.014557 | orchestrator | + created_at = (known after apply) 2026-03-10 00:02:30.014565 | orchestrator | + file = (known after apply) 2026-03-10 00:02:30.014573 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.014608 | orchestrator | + metadata = (known after apply) 2026-03-10 00:02:30.014616 | orchestrator | + min_disk_gb = (known after apply) 2026-03-10 00:02:30.014625 | orchestrator | + min_ram_mb = (known after apply) 2026-03-10 00:02:30.014633 | orchestrator | + most_recent = true 2026-03-10 00:02:30.014641 | orchestrator | + name = (known after apply) 2026-03-10 00:02:30.014649 | orchestrator | + protected = (known after apply) 2026-03-10 
00:02:30.014657 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.014668 | orchestrator | + schema = (known after apply) 2026-03-10 00:02:30.014676 | orchestrator | + size_bytes = (known after apply) 2026-03-10 00:02:30.014684 | orchestrator | + tags = (known after apply) 2026-03-10 00:02:30.014692 | orchestrator | + updated_at = (known after apply) 2026-03-10 00:02:30.014700 | orchestrator | } 2026-03-10 00:02:30.014717 | orchestrator | 2026-03-10 00:02:30.014731 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply 2026-03-10 00:02:30.014745 | orchestrator | # (config refers to values not yet known) 2026-03-10 00:02:30.014757 | orchestrator | <= data "openstack_images_image_v2" "image_node" { 2026-03-10 00:02:30.014770 | orchestrator | + checksum = (known after apply) 2026-03-10 00:02:30.014782 | orchestrator | + created_at = (known after apply) 2026-03-10 00:02:30.014794 | orchestrator | + file = (known after apply) 2026-03-10 00:02:30.014808 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.014821 | orchestrator | + metadata = (known after apply) 2026-03-10 00:02:30.014835 | orchestrator | + min_disk_gb = (known after apply) 2026-03-10 00:02:30.014849 | orchestrator | + min_ram_mb = (known after apply) 2026-03-10 00:02:30.014862 | orchestrator | + most_recent = true 2026-03-10 00:02:30.014874 | orchestrator | + name = (known after apply) 2026-03-10 00:02:30.014882 | orchestrator | + protected = (known after apply) 2026-03-10 00:02:30.014890 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.014946 | orchestrator | + schema = (known after apply) 2026-03-10 00:02:30.014955 | orchestrator | + size_bytes = (known after apply) 2026-03-10 00:02:30.014963 | orchestrator | + tags = (known after apply) 2026-03-10 00:02:30.014971 | orchestrator | + updated_at = (known after apply) 2026-03-10 00:02:30.014979 | orchestrator | } 2026-03-10 00:02:30.014999 | orchestrator | 2026-03-10 
00:02:30.015007 | orchestrator | # local_file.MANAGER_ADDRESS will be created 2026-03-10 00:02:30.015016 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" { 2026-03-10 00:02:30.015024 | orchestrator | + content = (known after apply) 2026-03-10 00:02:30.015032 | orchestrator | + content_base64sha256 = (known after apply) 2026-03-10 00:02:30.015040 | orchestrator | + content_base64sha512 = (known after apply) 2026-03-10 00:02:30.015047 | orchestrator | + content_md5 = (known after apply) 2026-03-10 00:02:30.015055 | orchestrator | + content_sha1 = (known after apply) 2026-03-10 00:02:30.015063 | orchestrator | + content_sha256 = (known after apply) 2026-03-10 00:02:30.015071 | orchestrator | + content_sha512 = (known after apply) 2026-03-10 00:02:30.015078 | orchestrator | + directory_permission = "0777" 2026-03-10 00:02:30.015086 | orchestrator | + file_permission = "0644" 2026-03-10 00:02:30.015094 | orchestrator | + filename = ".MANAGER_ADDRESS.ci" 2026-03-10 00:02:30.015101 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.015108 | orchestrator | } 2026-03-10 00:02:30.015114 | orchestrator | 2026-03-10 00:02:30.015121 | orchestrator | # local_file.id_rsa_pub will be created 2026-03-10 00:02:30.015127 | orchestrator | + resource "local_file" "id_rsa_pub" { 2026-03-10 00:02:30.015134 | orchestrator | + content = (known after apply) 2026-03-10 00:02:30.015140 | orchestrator | + content_base64sha256 = (known after apply) 2026-03-10 00:02:30.015147 | orchestrator | + content_base64sha512 = (known after apply) 2026-03-10 00:02:30.015153 | orchestrator | + content_md5 = (known after apply) 2026-03-10 00:02:30.015160 | orchestrator | + content_sha1 = (known after apply) 2026-03-10 00:02:30.015166 | orchestrator | + content_sha256 = (known after apply) 2026-03-10 00:02:30.015180 | orchestrator | + content_sha512 = (known after apply) 2026-03-10 00:02:30.015187 | orchestrator | + directory_permission = "0777" 2026-03-10 00:02:30.015193 | orchestrator 
| + file_permission = "0644" 2026-03-10 00:02:30.015207 | orchestrator | + filename = ".id_rsa.ci.pub" 2026-03-10 00:02:30.015214 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.015221 | orchestrator | } 2026-03-10 00:02:30.015227 | orchestrator | 2026-03-10 00:02:30.015237 | orchestrator | # local_file.inventory will be created 2026-03-10 00:02:30.015243 | orchestrator | + resource "local_file" "inventory" { 2026-03-10 00:02:30.015250 | orchestrator | + content = (known after apply) 2026-03-10 00:02:30.015256 | orchestrator | + content_base64sha256 = (known after apply) 2026-03-10 00:02:30.015263 | orchestrator | + content_base64sha512 = (known after apply) 2026-03-10 00:02:30.015270 | orchestrator | + content_md5 = (known after apply) 2026-03-10 00:02:30.015276 | orchestrator | + content_sha1 = (known after apply) 2026-03-10 00:02:30.015283 | orchestrator | + content_sha256 = (known after apply) 2026-03-10 00:02:30.015290 | orchestrator | + content_sha512 = (known after apply) 2026-03-10 00:02:30.015296 | orchestrator | + directory_permission = "0777" 2026-03-10 00:02:30.015303 | orchestrator | + file_permission = "0644" 2026-03-10 00:02:30.015309 | orchestrator | + filename = "inventory.ci" 2026-03-10 00:02:30.015316 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.015322 | orchestrator | } 2026-03-10 00:02:30.015329 | orchestrator | 2026-03-10 00:02:30.015335 | orchestrator | # local_sensitive_file.id_rsa will be created 2026-03-10 00:02:30.015342 | orchestrator | + resource "local_sensitive_file" "id_rsa" { 2026-03-10 00:02:30.015348 | orchestrator | + content = (sensitive value) 2026-03-10 00:02:30.015355 | orchestrator | + content_base64sha256 = (known after apply) 2026-03-10 00:02:30.015362 | orchestrator | + content_base64sha512 = (known after apply) 2026-03-10 00:02:30.015368 | orchestrator | + content_md5 = (known after apply) 2026-03-10 00:02:30.015374 | orchestrator | + content_sha1 = (known after apply) 2026-03-10 
00:02:30.015381 | orchestrator | + content_sha256 = (known after apply) 2026-03-10 00:02:30.015387 | orchestrator | + content_sha512 = (known after apply) 2026-03-10 00:02:30.015394 | orchestrator | + directory_permission = "0700" 2026-03-10 00:02:30.015400 | orchestrator | + file_permission = "0600" 2026-03-10 00:02:30.015407 | orchestrator | + filename = ".id_rsa.ci" 2026-03-10 00:02:30.015413 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.015420 | orchestrator | } 2026-03-10 00:02:30.015426 | orchestrator | 2026-03-10 00:02:30.015433 | orchestrator | # null_resource.node_semaphore will be created 2026-03-10 00:02:30.015440 | orchestrator | + resource "null_resource" "node_semaphore" { 2026-03-10 00:02:30.015446 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.015453 | orchestrator | } 2026-03-10 00:02:30.015461 | orchestrator | 2026-03-10 00:02:30.015468 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created 2026-03-10 00:02:30.015475 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" { 2026-03-10 00:02:30.015482 | orchestrator | + attachment = (known after apply) 2026-03-10 00:02:30.015488 | orchestrator | + availability_zone = "nova" 2026-03-10 00:02:30.015495 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.015501 | orchestrator | + image_id = (known after apply) 2026-03-10 00:02:30.015508 | orchestrator | + metadata = (known after apply) 2026-03-10 00:02:30.015514 | orchestrator | + name = "testbed-volume-manager-base" 2026-03-10 00:02:30.015521 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.015527 | orchestrator | + size = 80 2026-03-10 00:02:30.015534 | orchestrator | + volume_retype_policy = "never" 2026-03-10 00:02:30.015540 | orchestrator | + volume_type = "ssd" 2026-03-10 00:02:30.015547 | orchestrator | } 2026-03-10 00:02:30.015553 | orchestrator | 2026-03-10 00:02:30.015560 | orchestrator | # 
openstack_blockstorage_volume_v3.node_base_volume[0] will be created 2026-03-10 00:02:30.015567 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2026-03-10 00:02:30.015573 | orchestrator | + attachment = (known after apply) 2026-03-10 00:02:30.015580 | orchestrator | + availability_zone = "nova" 2026-03-10 00:02:30.015586 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.015597 | orchestrator | + image_id = (known after apply) 2026-03-10 00:02:30.015604 | orchestrator | + metadata = (known after apply) 2026-03-10 00:02:30.015610 | orchestrator | + name = "testbed-volume-0-node-base" 2026-03-10 00:02:30.015617 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.015623 | orchestrator | + size = 80 2026-03-10 00:02:30.015630 | orchestrator | + volume_retype_policy = "never" 2026-03-10 00:02:30.015636 | orchestrator | + volume_type = "ssd" 2026-03-10 00:02:30.015643 | orchestrator | } 2026-03-10 00:02:30.015652 | orchestrator | 2026-03-10 00:02:30.015658 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created 2026-03-10 00:02:30.015665 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2026-03-10 00:02:30.015672 | orchestrator | + attachment = (known after apply) 2026-03-10 00:02:30.015678 | orchestrator | + availability_zone = "nova" 2026-03-10 00:02:30.015685 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.015691 | orchestrator | + image_id = (known after apply) 2026-03-10 00:02:30.015698 | orchestrator | + metadata = (known after apply) 2026-03-10 00:02:30.015704 | orchestrator | + name = "testbed-volume-1-node-base" 2026-03-10 00:02:30.015711 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.015717 | orchestrator | + size = 80 2026-03-10 00:02:30.015724 | orchestrator | + volume_retype_policy = "never" 2026-03-10 00:02:30.015730 | orchestrator | + volume_type = "ssd" 2026-03-10 00:02:30.015736 | 
orchestrator | } 2026-03-10 00:02:30.015743 | orchestrator | 2026-03-10 00:02:30.015750 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created 2026-03-10 00:02:30.015756 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2026-03-10 00:02:30.015763 | orchestrator | + attachment = (known after apply) 2026-03-10 00:02:30.015769 | orchestrator | + availability_zone = "nova" 2026-03-10 00:02:30.015780 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.015793 | orchestrator | + image_id = (known after apply) 2026-03-10 00:02:30.015805 | orchestrator | + metadata = (known after apply) 2026-03-10 00:02:30.015817 | orchestrator | + name = "testbed-volume-2-node-base" 2026-03-10 00:02:30.015828 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.015840 | orchestrator | + size = 80 2026-03-10 00:02:30.015857 | orchestrator | + volume_retype_policy = "never" 2026-03-10 00:02:30.015869 | orchestrator | + volume_type = "ssd" 2026-03-10 00:02:30.015882 | orchestrator | } 2026-03-10 00:02:30.015921 | orchestrator | 2026-03-10 00:02:30.015929 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created 2026-03-10 00:02:30.015936 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2026-03-10 00:02:30.015942 | orchestrator | + attachment = (known after apply) 2026-03-10 00:02:30.015949 | orchestrator | + availability_zone = "nova" 2026-03-10 00:02:30.015955 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.015962 | orchestrator | + image_id = (known after apply) 2026-03-10 00:02:30.015968 | orchestrator | + metadata = (known after apply) 2026-03-10 00:02:30.015975 | orchestrator | + name = "testbed-volume-3-node-base" 2026-03-10 00:02:30.015981 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.015988 | orchestrator | + size = 80 2026-03-10 00:02:30.015994 | orchestrator | + volume_retype_policy = 
"never" 2026-03-10 00:02:30.016001 | orchestrator | + volume_type = "ssd" 2026-03-10 00:02:30.016007 | orchestrator | } 2026-03-10 00:02:30.016013 | orchestrator | 2026-03-10 00:02:30.016020 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created 2026-03-10 00:02:30.016026 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2026-03-10 00:02:30.016033 | orchestrator | + attachment = (known after apply) 2026-03-10 00:02:30.016040 | orchestrator | + availability_zone = "nova" 2026-03-10 00:02:30.016046 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.016064 | orchestrator | + image_id = (known after apply) 2026-03-10 00:02:30.016076 | orchestrator | + metadata = (known after apply) 2026-03-10 00:02:30.016087 | orchestrator | + name = "testbed-volume-4-node-base" 2026-03-10 00:02:30.016098 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.016109 | orchestrator | + size = 80 2026-03-10 00:02:30.016120 | orchestrator | + volume_retype_policy = "never" 2026-03-10 00:02:30.016131 | orchestrator | + volume_type = "ssd" 2026-03-10 00:02:30.016142 | orchestrator | } 2026-03-10 00:02:30.016154 | orchestrator | 2026-03-10 00:02:30.016163 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created 2026-03-10 00:02:30.016170 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" { 2026-03-10 00:02:30.016177 | orchestrator | + attachment = (known after apply) 2026-03-10 00:02:30.016183 | orchestrator | + availability_zone = "nova" 2026-03-10 00:02:30.016189 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.016196 | orchestrator | + image_id = (known after apply) 2026-03-10 00:02:30.016202 | orchestrator | + metadata = (known after apply) 2026-03-10 00:02:30.016209 | orchestrator | + name = "testbed-volume-5-node-base" 2026-03-10 00:02:30.016215 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.016222 
      + size                 = 80
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[0] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-0-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[1] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-1-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[2] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-2-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[3] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-3-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[4] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-4-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[5] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-5-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[6] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-6-node-3"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_blockstorage_volume_v3.node_volume[7] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-7-node-4"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

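A plan with per-index `node_volume` entries like these is typically produced by a single counted resource. A minimal sketch of the configuration that could generate them (the use of `count` and the index arithmetic for the node suffix are assumptions inferred from the names `testbed-volume-<i>-node-<3 + i % 3>`; the attribute names and values match the plan output):

```hcl
# Hypothetical sketch: one counted volume resource behind the
# node_volume[0..8] plan entries above. Assumption: the target node
# number cycles through 3, 4, 5 as seen in the planned names.
resource "openstack_blockstorage_volume_v3" "node_volume" {
  count = 9

  availability_zone    = "nova"
  name                 = "testbed-volume-${count.index}-node-${3 + count.index % 3}"
  size                 = 20
  volume_type          = "ssd"
  volume_retype_policy = "never"
}
```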
  # openstack_blockstorage_volume_v3.node_volume[8] will be created
  + resource "openstack_blockstorage_volume_v3" "node_volume" {
      + attachment           = (known after apply)
      + availability_zone    = "nova"
      + id                   = (known after apply)
      + metadata             = (known after apply)
      + name                 = "testbed-volume-8-node-5"
      + region               = (known after apply)
      + size                 = 20
      + volume_retype_policy = "never"
      + volume_type          = "ssd"
    }

  # openstack_compute_instance_v2.manager_server will be created
  + resource "openstack_compute_instance_v2" "manager_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-4V-16"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-manager"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = (sensitive value)

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[0] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-0"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[1] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-1"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[2] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

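The six identical `node_server` entries point to one counted instance resource booting from a volume. A minimal sketch under stated assumptions (the boot-volume resource `node_boot_volume`, the port resource `node_port`, and the `user_data.yml` file are hypothetical names; the literal attributes match the plan):

```hcl
# Hypothetical sketch behind the node_server[0..5] plan entries.
# Assumptions: a separate counted boot volume and pre-created port exist.
resource "openstack_compute_instance_v2" "node_server" {
  count = 6

  name              = "testbed-node-${count.index}"
  availability_zone = "nova"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"
  config_drive      = true
  power_state       = "active"
  user_data         = file("user_data.yml") # assumption: rendered cloud-init

  block_device {
    # assumption: boot volume created elsewhere in the configuration
    uuid                  = openstack_blockstorage_volume_v3.node_boot_volume[count.index].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    # assumption: attach via a pre-created Neutron port
    port = openstack_networking_port_v2.node_port[count.index].id
  }
}
```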
  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
+ dns_name = (known after apply) 2026-03-10 00:02:30.020513 | orchestrator | + fixed_ip = (known after apply) 2026-03-10 00:02:30.020519 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.020525 | orchestrator | + pool = "public" 2026-03-10 00:02:30.020532 | orchestrator | + port_id = (known after apply) 2026-03-10 00:02:30.020538 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.020544 | orchestrator | + subnet_id = (known after apply) 2026-03-10 00:02:30.020549 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.020559 | orchestrator | } 2026-03-10 00:02:30.020569 | orchestrator | 2026-03-10 00:02:30.020579 | orchestrator | # openstack_networking_network_v2.net_management will be created 2026-03-10 00:02:30.020589 | orchestrator | + resource "openstack_networking_network_v2" "net_management" { 2026-03-10 00:02:30.020598 | orchestrator | + admin_state_up = (known after apply) 2026-03-10 00:02:30.020608 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:30.020618 | orchestrator | + availability_zone_hints = [ 2026-03-10 00:02:30.020628 | orchestrator | + "nova", 2026-03-10 00:02:30.020638 | orchestrator | ] 2026-03-10 00:02:30.020648 | orchestrator | + dns_domain = (known after apply) 2026-03-10 00:02:30.020659 | orchestrator | + external = (known after apply) 2026-03-10 00:02:30.020669 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.020679 | orchestrator | + mtu = (known after apply) 2026-03-10 00:02:30.020686 | orchestrator | + name = "net-testbed-management" 2026-03-10 00:02:30.020692 | orchestrator | + port_security_enabled = (known after apply) 2026-03-10 00:02:30.020704 | orchestrator | + qos_policy_id = (known after apply) 2026-03-10 00:02:30.020710 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.020716 | orchestrator | + shared = (known after apply) 2026-03-10 00:02:30.020722 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.020728 | 
orchestrator | + transparent_vlan = (known after apply) 2026-03-10 00:02:30.020734 | orchestrator | 2026-03-10 00:02:30.020740 | orchestrator | + segments (known after apply) 2026-03-10 00:02:30.020747 | orchestrator | } 2026-03-10 00:02:30.020756 | orchestrator | 2026-03-10 00:02:30.020762 | orchestrator | # openstack_networking_port_v2.manager_port_management will be created 2026-03-10 00:02:30.020768 | orchestrator | + resource "openstack_networking_port_v2" "manager_port_management" { 2026-03-10 00:02:30.020774 | orchestrator | + admin_state_up = (known after apply) 2026-03-10 00:02:30.020781 | orchestrator | + all_fixed_ips = (known after apply) 2026-03-10 00:02:30.020787 | orchestrator | + all_security_group_ids = (known after apply) 2026-03-10 00:02:30.020793 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:30.020799 | orchestrator | + device_id = (known after apply) 2026-03-10 00:02:30.020805 | orchestrator | + device_owner = (known after apply) 2026-03-10 00:02:30.020811 | orchestrator | + dns_assignment = (known after apply) 2026-03-10 00:02:30.020817 | orchestrator | + dns_name = (known after apply) 2026-03-10 00:02:30.020823 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.020829 | orchestrator | + mac_address = (known after apply) 2026-03-10 00:02:30.020835 | orchestrator | + network_id = (known after apply) 2026-03-10 00:02:30.020841 | orchestrator | + port_security_enabled = (known after apply) 2026-03-10 00:02:30.020847 | orchestrator | + qos_policy_id = (known after apply) 2026-03-10 00:02:30.020853 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.020859 | orchestrator | + security_group_ids = (known after apply) 2026-03-10 00:02:30.020865 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.020871 | orchestrator | 2026-03-10 00:02:30.020877 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.020883 | orchestrator | + ip_address = "192.168.16.8/32" 2026-03-10 
00:02:30.020890 | orchestrator | } 2026-03-10 00:02:30.020933 | orchestrator | 2026-03-10 00:02:30.020941 | orchestrator | + binding (known after apply) 2026-03-10 00:02:30.020948 | orchestrator | 2026-03-10 00:02:30.020954 | orchestrator | + fixed_ip { 2026-03-10 00:02:30.020960 | orchestrator | + ip_address = "192.168.16.5" 2026-03-10 00:02:30.020967 | orchestrator | + subnet_id = (known after apply) 2026-03-10 00:02:30.020973 | orchestrator | } 2026-03-10 00:02:30.020979 | orchestrator | } 2026-03-10 00:02:30.020986 | orchestrator | 2026-03-10 00:02:30.020992 | orchestrator | # openstack_networking_port_v2.node_port_management[0] will be created 2026-03-10 00:02:30.020999 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-03-10 00:02:30.021005 | orchestrator | + admin_state_up = (known after apply) 2026-03-10 00:02:30.021011 | orchestrator | + all_fixed_ips = (known after apply) 2026-03-10 00:02:30.021018 | orchestrator | + all_security_group_ids = (known after apply) 2026-03-10 00:02:30.021024 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:30.021030 | orchestrator | + device_id = (known after apply) 2026-03-10 00:02:30.021036 | orchestrator | + device_owner = (known after apply) 2026-03-10 00:02:30.021043 | orchestrator | + dns_assignment = (known after apply) 2026-03-10 00:02:30.021049 | orchestrator | + dns_name = (known after apply) 2026-03-10 00:02:30.021055 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.021061 | orchestrator | + mac_address = (known after apply) 2026-03-10 00:02:30.021068 | orchestrator | + network_id = (known after apply) 2026-03-10 00:02:30.021074 | orchestrator | + port_security_enabled = (known after apply) 2026-03-10 00:02:30.021080 | orchestrator | + qos_policy_id = (known after apply) 2026-03-10 00:02:30.021087 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.021097 | orchestrator | + security_group_ids = (known after apply) 2026-03-10 
00:02:30.021104 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.021110 | orchestrator | 2026-03-10 00:02:30.021116 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.021123 | orchestrator | + ip_address = "192.168.16.254/32" 2026-03-10 00:02:30.021129 | orchestrator | } 2026-03-10 00:02:30.021135 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.021142 | orchestrator | + ip_address = "192.168.16.8/32" 2026-03-10 00:02:30.021148 | orchestrator | } 2026-03-10 00:02:30.021154 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.021161 | orchestrator | + ip_address = "192.168.16.9/32" 2026-03-10 00:02:30.021167 | orchestrator | } 2026-03-10 00:02:30.021173 | orchestrator | 2026-03-10 00:02:30.021180 | orchestrator | + binding (known after apply) 2026-03-10 00:02:30.021186 | orchestrator | 2026-03-10 00:02:30.021192 | orchestrator | + fixed_ip { 2026-03-10 00:02:30.021199 | orchestrator | + ip_address = "192.168.16.10" 2026-03-10 00:02:30.021205 | orchestrator | + subnet_id = (known after apply) 2026-03-10 00:02:30.021212 | orchestrator | } 2026-03-10 00:02:30.021218 | orchestrator | } 2026-03-10 00:02:30.021228 | orchestrator | 2026-03-10 00:02:30.021235 | orchestrator | # openstack_networking_port_v2.node_port_management[1] will be created 2026-03-10 00:02:30.021241 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-03-10 00:02:30.021252 | orchestrator | + admin_state_up = (known after apply) 2026-03-10 00:02:30.021257 | orchestrator | + all_fixed_ips = (known after apply) 2026-03-10 00:02:30.021263 | orchestrator | + all_security_group_ids = (known after apply) 2026-03-10 00:02:30.021268 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:30.021274 | orchestrator | + device_id = (known after apply) 2026-03-10 00:02:30.021279 | orchestrator | + device_owner = (known after apply) 2026-03-10 00:02:30.021285 | orchestrator | + dns_assignment = (known after 
apply) 2026-03-10 00:02:30.021290 | orchestrator | + dns_name = (known after apply) 2026-03-10 00:02:30.021296 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.021301 | orchestrator | + mac_address = (known after apply) 2026-03-10 00:02:30.021307 | orchestrator | + network_id = (known after apply) 2026-03-10 00:02:30.021312 | orchestrator | + port_security_enabled = (known after apply) 2026-03-10 00:02:30.021318 | orchestrator | + qos_policy_id = (known after apply) 2026-03-10 00:02:30.021323 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.021329 | orchestrator | + security_group_ids = (known after apply) 2026-03-10 00:02:30.021334 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.021340 | orchestrator | 2026-03-10 00:02:30.021345 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.021351 | orchestrator | + ip_address = "192.168.16.254/32" 2026-03-10 00:02:30.021356 | orchestrator | } 2026-03-10 00:02:30.021362 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.021367 | orchestrator | + ip_address = "192.168.16.8/32" 2026-03-10 00:02:30.021373 | orchestrator | } 2026-03-10 00:02:30.021378 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.021384 | orchestrator | + ip_address = "192.168.16.9/32" 2026-03-10 00:02:30.021389 | orchestrator | } 2026-03-10 00:02:30.021395 | orchestrator | 2026-03-10 00:02:30.021400 | orchestrator | + binding (known after apply) 2026-03-10 00:02:30.021405 | orchestrator | 2026-03-10 00:02:30.021411 | orchestrator | + fixed_ip { 2026-03-10 00:02:30.021416 | orchestrator | + ip_address = "192.168.16.11" 2026-03-10 00:02:30.021422 | orchestrator | + subnet_id = (known after apply) 2026-03-10 00:02:30.021428 | orchestrator | } 2026-03-10 00:02:30.021433 | orchestrator | } 2026-03-10 00:02:30.021439 | orchestrator | 2026-03-10 00:02:30.021444 | orchestrator | # openstack_networking_port_v2.node_port_management[2] will be created 2026-03-10 
00:02:30.021450 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-03-10 00:02:30.021455 | orchestrator | + admin_state_up = (known after apply) 2026-03-10 00:02:30.021461 | orchestrator | + all_fixed_ips = (known after apply) 2026-03-10 00:02:30.021466 | orchestrator | + all_security_group_ids = (known after apply) 2026-03-10 00:02:30.021472 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:30.021482 | orchestrator | + device_id = (known after apply) 2026-03-10 00:02:30.021487 | orchestrator | + device_owner = (known after apply) 2026-03-10 00:02:30.021493 | orchestrator | + dns_assignment = (known after apply) 2026-03-10 00:02:30.021498 | orchestrator | + dns_name = (known after apply) 2026-03-10 00:02:30.021504 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.021509 | orchestrator | + mac_address = (known after apply) 2026-03-10 00:02:30.021515 | orchestrator | + network_id = (known after apply) 2026-03-10 00:02:30.021520 | orchestrator | + port_security_enabled = (known after apply) 2026-03-10 00:02:30.021525 | orchestrator | + qos_policy_id = (known after apply) 2026-03-10 00:02:30.021531 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.021536 | orchestrator | + security_group_ids = (known after apply) 2026-03-10 00:02:30.021542 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.021547 | orchestrator | 2026-03-10 00:02:30.021553 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.021558 | orchestrator | + ip_address = "192.168.16.254/32" 2026-03-10 00:02:30.021564 | orchestrator | } 2026-03-10 00:02:30.021569 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.021575 | orchestrator | + ip_address = "192.168.16.8/32" 2026-03-10 00:02:30.021580 | orchestrator | } 2026-03-10 00:02:30.021586 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.021591 | orchestrator | + ip_address = "192.168.16.9/32" 2026-03-10 00:02:30.021597 
| orchestrator | } 2026-03-10 00:02:30.021602 | orchestrator | 2026-03-10 00:02:30.021608 | orchestrator | + binding (known after apply) 2026-03-10 00:02:30.021613 | orchestrator | 2026-03-10 00:02:30.021619 | orchestrator | + fixed_ip { 2026-03-10 00:02:30.021624 | orchestrator | + ip_address = "192.168.16.12" 2026-03-10 00:02:30.021630 | orchestrator | + subnet_id = (known after apply) 2026-03-10 00:02:30.021635 | orchestrator | } 2026-03-10 00:02:30.021641 | orchestrator | } 2026-03-10 00:02:30.021646 | orchestrator | 2026-03-10 00:02:30.021652 | orchestrator | # openstack_networking_port_v2.node_port_management[3] will be created 2026-03-10 00:02:30.021657 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-03-10 00:02:30.021663 | orchestrator | + admin_state_up = (known after apply) 2026-03-10 00:02:30.021668 | orchestrator | + all_fixed_ips = (known after apply) 2026-03-10 00:02:30.021674 | orchestrator | + all_security_group_ids = (known after apply) 2026-03-10 00:02:30.021679 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:30.021685 | orchestrator | + device_id = (known after apply) 2026-03-10 00:02:30.021690 | orchestrator | + device_owner = (known after apply) 2026-03-10 00:02:30.021696 | orchestrator | + dns_assignment = (known after apply) 2026-03-10 00:02:30.021701 | orchestrator | + dns_name = (known after apply) 2026-03-10 00:02:30.021707 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.021712 | orchestrator | + mac_address = (known after apply) 2026-03-10 00:02:30.021718 | orchestrator | + network_id = (known after apply) 2026-03-10 00:02:30.021723 | orchestrator | + port_security_enabled = (known after apply) 2026-03-10 00:02:30.021729 | orchestrator | + qos_policy_id = (known after apply) 2026-03-10 00:02:30.021734 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.021740 | orchestrator | + security_group_ids = (known after apply) 2026-03-10 00:02:30.021745 | 
orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.021751 | orchestrator | 2026-03-10 00:02:30.021756 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.021762 | orchestrator | + ip_address = "192.168.16.254/32" 2026-03-10 00:02:30.021767 | orchestrator | } 2026-03-10 00:02:30.021773 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.021778 | orchestrator | + ip_address = "192.168.16.8/32" 2026-03-10 00:02:30.021784 | orchestrator | } 2026-03-10 00:02:30.021789 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.021795 | orchestrator | + ip_address = "192.168.16.9/32" 2026-03-10 00:02:30.021800 | orchestrator | } 2026-03-10 00:02:30.021806 | orchestrator | 2026-03-10 00:02:30.021820 | orchestrator | + binding (known after apply) 2026-03-10 00:02:30.021826 | orchestrator | 2026-03-10 00:02:30.021831 | orchestrator | + fixed_ip { 2026-03-10 00:02:30.021837 | orchestrator | + ip_address = "192.168.16.13" 2026-03-10 00:02:30.021842 | orchestrator | + subnet_id = (known after apply) 2026-03-10 00:02:30.021848 | orchestrator | } 2026-03-10 00:02:30.021854 | orchestrator | } 2026-03-10 00:02:30.021859 | orchestrator | 2026-03-10 00:02:30.021865 | orchestrator | # openstack_networking_port_v2.node_port_management[4] will be created 2026-03-10 00:02:30.021870 | orchestrator | + resource "openstack_networking_port_v2" "node_port_management" { 2026-03-10 00:02:30.021876 | orchestrator | + admin_state_up = (known after apply) 2026-03-10 00:02:30.021881 | orchestrator | + all_fixed_ips = (known after apply) 2026-03-10 00:02:30.021887 | orchestrator | + all_security_group_ids = (known after apply) 2026-03-10 00:02:30.021893 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:30.021912 | orchestrator | + device_id = (known after apply) 2026-03-10 00:02:30.021917 | orchestrator | + device_owner = (known after apply) 2026-03-10 00:02:30.021923 | orchestrator | + dns_assignment = (known after apply) 2026-03-10 
00:02:30.021928 | orchestrator | + dns_name = (known after apply) 2026-03-10 00:02:30.021936 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.021942 | orchestrator | + mac_address = (known after apply) 2026-03-10 00:02:30.021947 | orchestrator | + network_id = (known after apply) 2026-03-10 00:02:30.021952 | orchestrator | + port_security_enabled = (known after apply) 2026-03-10 00:02:30.021958 | orchestrator | + qos_policy_id = (known after apply) 2026-03-10 00:02:30.021963 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.021968 | orchestrator | + security_group_ids = (known after apply) 2026-03-10 00:02:30.021974 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.021980 | orchestrator | 2026-03-10 00:02:30.021986 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.021994 | orchestrator | + ip_address = "192.168.16.254/32" 2026-03-10 00:02:30.022000 | orchestrator | } 2026-03-10 00:02:30.022005 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.022010 | orchestrator | + ip_address = "192.168.16.8/32" 2026-03-10 00:02:30.022391 | orchestrator | } 2026-03-10 00:02:30.022398 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.022444 | orchestrator | + ip_address = "192.168.16.9/32" 2026-03-10 00:02:30.030046 | orchestrator | } 2026-03-10 00:02:30.030068 | orchestrator | 2026-03-10 00:02:30.030076 | orchestrator | + binding (known after apply) 2026-03-10 00:02:30.030084 | orchestrator | 2026-03-10 00:02:30.030092 | orchestrator | + fixed_ip { 2026-03-10 00:02:30.030100 | orchestrator | + ip_address = "192.168.16.14" 2026-03-10 00:02:30.030107 | orchestrator | + subnet_id = (known after apply) 2026-03-10 00:02:30.030115 | orchestrator | } 2026-03-10 00:02:30.030123 | orchestrator | } 2026-03-10 00:02:30.030132 | orchestrator | 2026-03-10 00:02:30.030138 | orchestrator | # openstack_networking_port_v2.node_port_management[5] will be created 2026-03-10 00:02:30.030143 | orchestrator | 
+ resource "openstack_networking_port_v2" "node_port_management" { 2026-03-10 00:02:30.030148 | orchestrator | + admin_state_up = (known after apply) 2026-03-10 00:02:30.030153 | orchestrator | + all_fixed_ips = (known after apply) 2026-03-10 00:02:30.030158 | orchestrator | + all_security_group_ids = (known after apply) 2026-03-10 00:02:30.030163 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:30.030167 | orchestrator | + device_id = (known after apply) 2026-03-10 00:02:30.030172 | orchestrator | + device_owner = (known after apply) 2026-03-10 00:02:30.030177 | orchestrator | + dns_assignment = (known after apply) 2026-03-10 00:02:30.030181 | orchestrator | + dns_name = (known after apply) 2026-03-10 00:02:30.030186 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.030191 | orchestrator | + mac_address = (known after apply) 2026-03-10 00:02:30.030195 | orchestrator | + network_id = (known after apply) 2026-03-10 00:02:30.030199 | orchestrator | + port_security_enabled = (known after apply) 2026-03-10 00:02:30.030204 | orchestrator | + qos_policy_id = (known after apply) 2026-03-10 00:02:30.030220 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.030225 | orchestrator | + security_group_ids = (known after apply) 2026-03-10 00:02:30.030229 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.030233 | orchestrator | 2026-03-10 00:02:30.030238 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.030244 | orchestrator | + ip_address = "192.168.16.254/32" 2026-03-10 00:02:30.030252 | orchestrator | } 2026-03-10 00:02:30.030259 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.030265 | orchestrator | + ip_address = "192.168.16.8/32" 2026-03-10 00:02:30.030273 | orchestrator | } 2026-03-10 00:02:30.030280 | orchestrator | + allowed_address_pairs { 2026-03-10 00:02:30.030288 | orchestrator | + ip_address = "192.168.16.9/32" 2026-03-10 00:02:30.030296 | orchestrator | } 2026-03-10 
00:02:30.030304 | orchestrator | 2026-03-10 00:02:30.030310 | orchestrator | + binding (known after apply) 2026-03-10 00:02:30.030315 | orchestrator | 2026-03-10 00:02:30.030320 | orchestrator | + fixed_ip { 2026-03-10 00:02:30.030324 | orchestrator | + ip_address = "192.168.16.15" 2026-03-10 00:02:30.030329 | orchestrator | + subnet_id = (known after apply) 2026-03-10 00:02:30.030333 | orchestrator | } 2026-03-10 00:02:30.030338 | orchestrator | } 2026-03-10 00:02:30.030342 | orchestrator | 2026-03-10 00:02:30.030347 | orchestrator | # openstack_networking_router_interface_v2.router_interface will be created 2026-03-10 00:02:30.030352 | orchestrator | + resource "openstack_networking_router_interface_v2" "router_interface" { 2026-03-10 00:02:30.030356 | orchestrator | + force_destroy = false 2026-03-10 00:02:30.030361 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.030365 | orchestrator | + port_id = (known after apply) 2026-03-10 00:02:30.030370 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.030375 | orchestrator | + router_id = (known after apply) 2026-03-10 00:02:30.030379 | orchestrator | + subnet_id = (known after apply) 2026-03-10 00:02:30.030384 | orchestrator | } 2026-03-10 00:02:30.030388 | orchestrator | 2026-03-10 00:02:30.030393 | orchestrator | # openstack_networking_router_v2.router will be created 2026-03-10 00:02:30.030397 | orchestrator | + resource "openstack_networking_router_v2" "router" { 2026-03-10 00:02:30.030402 | orchestrator | + admin_state_up = (known after apply) 2026-03-10 00:02:30.030406 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:30.030411 | orchestrator | + availability_zone_hints = [ 2026-03-10 00:02:30.030415 | orchestrator | + "nova", 2026-03-10 00:02:30.030420 | orchestrator | ] 2026-03-10 00:02:30.030424 | orchestrator | + distributed = (known after apply) 2026-03-10 00:02:30.030429 | orchestrator | + enable_snat = (known after apply) 2026-03-10 00:02:30.030433 | 
orchestrator | + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2026-03-10 00:02:30.030437 | orchestrator | + external_qos_policy_id = (known after apply) 2026-03-10 00:02:30.030441 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.030445 | orchestrator | + name = "testbed" 2026-03-10 00:02:30.030450 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.030454 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.030458 | orchestrator | 2026-03-10 00:02:30.030470 | orchestrator | + external_fixed_ip (known after apply) 2026-03-10 00:02:30.030475 | orchestrator | } 2026-03-10 00:02:30.030479 | orchestrator | 2026-03-10 00:02:30.030483 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2026-03-10 00:02:30.030488 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2026-03-10 00:02:30.030492 | orchestrator | + description = "ssh" 2026-03-10 00:02:30.030496 | orchestrator | + direction = "ingress" 2026-03-10 00:02:30.030500 | orchestrator | + ethertype = "IPv4" 2026-03-10 00:02:30.030504 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.030508 | orchestrator | + port_range_max = 22 2026-03-10 00:02:30.030512 | orchestrator | + port_range_min = 22 2026-03-10 00:02:30.030517 | orchestrator | + protocol = "tcp" 2026-03-10 00:02:30.030521 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.030530 | orchestrator | + remote_address_group_id = (known after apply) 2026-03-10 00:02:30.030534 | orchestrator | + remote_group_id = (known after apply) 2026-03-10 00:02:30.030538 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-03-10 00:02:30.030542 | orchestrator | + security_group_id = (known after apply) 2026-03-10 00:02:30.030546 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.030550 | orchestrator | } 2026-03-10 00:02:30.030554 | orchestrator | 2026-03-10 
00:02:30.030558 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2026-03-10 00:02:30.030562 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2026-03-10 00:02:30.030566 | orchestrator | + description = "wireguard" 2026-03-10 00:02:30.030570 | orchestrator | + direction = "ingress" 2026-03-10 00:02:30.030574 | orchestrator | + ethertype = "IPv4" 2026-03-10 00:02:30.030579 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.030583 | orchestrator | + port_range_max = 51820 2026-03-10 00:02:30.030587 | orchestrator | + port_range_min = 51820 2026-03-10 00:02:30.030591 | orchestrator | + protocol = "udp" 2026-03-10 00:02:30.030595 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.030599 | orchestrator | + remote_address_group_id = (known after apply) 2026-03-10 00:02:30.030603 | orchestrator | + remote_group_id = (known after apply) 2026-03-10 00:02:30.030608 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-03-10 00:02:30.030612 | orchestrator | + security_group_id = (known after apply) 2026-03-10 00:02:30.030616 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.030620 | orchestrator | } 2026-03-10 00:02:30.030624 | orchestrator | 2026-03-10 00:02:30.030628 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2026-03-10 00:02:30.030632 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2026-03-10 00:02:30.030641 | orchestrator | + direction = "ingress" 2026-03-10 00:02:30.030645 | orchestrator | + ethertype = "IPv4" 2026-03-10 00:02:30.030649 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.030653 | orchestrator | + protocol = "tcp" 2026-03-10 00:02:30.030657 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.030661 | orchestrator | + remote_address_group_id = (known 
after apply) 2026-03-10 00:02:30.030665 | orchestrator | + remote_group_id = (known after apply) 2026-03-10 00:02:30.030669 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-03-10 00:02:30.030673 | orchestrator | + security_group_id = (known after apply) 2026-03-10 00:02:30.030677 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.030681 | orchestrator | } 2026-03-10 00:02:30.030685 | orchestrator | 2026-03-10 00:02:30.030689 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2026-03-10 00:02:30.030694 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2026-03-10 00:02:30.030698 | orchestrator | + direction = "ingress" 2026-03-10 00:02:30.030702 | orchestrator | + ethertype = "IPv4" 2026-03-10 00:02:30.030706 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.030713 | orchestrator | + protocol = "udp" 2026-03-10 00:02:30.030719 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.030725 | orchestrator | + remote_address_group_id = (known after apply) 2026-03-10 00:02:30.030732 | orchestrator | + remote_group_id = (known after apply) 2026-03-10 00:02:30.030738 | orchestrator | + remote_ip_prefix = "192.168.16.0/20" 2026-03-10 00:02:30.030745 | orchestrator | + security_group_id = (known after apply) 2026-03-10 00:02:30.030751 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.030757 | orchestrator | } 2026-03-10 00:02:30.030763 | orchestrator | 2026-03-10 00:02:30.030770 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2026-03-10 00:02:30.030781 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2026-03-10 00:02:30.030787 | orchestrator | + direction = "ingress" 2026-03-10 00:02:30.030794 | orchestrator | + ethertype = "IPv4" 2026-03-10 00:02:30.030800 | orchestrator | + id = 
(known after apply) 2026-03-10 00:02:30.030807 | orchestrator | + protocol = "icmp" 2026-03-10 00:02:30.030814 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.030820 | orchestrator | + remote_address_group_id = (known after apply) 2026-03-10 00:02:30.030827 | orchestrator | + remote_group_id = (known after apply) 2026-03-10 00:02:30.030833 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-03-10 00:02:30.030837 | orchestrator | + security_group_id = (known after apply) 2026-03-10 00:02:30.030841 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.030845 | orchestrator | } 2026-03-10 00:02:30.030850 | orchestrator | 2026-03-10 00:02:30.030854 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2026-03-10 00:02:30.030858 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2026-03-10 00:02:30.030862 | orchestrator | + direction = "ingress" 2026-03-10 00:02:30.030866 | orchestrator | + ethertype = "IPv4" 2026-03-10 00:02:30.030870 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.030874 | orchestrator | + protocol = "tcp" 2026-03-10 00:02:30.030887 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.030891 | orchestrator | + remote_address_group_id = (known after apply) 2026-03-10 00:02:30.030907 | orchestrator | + remote_group_id = (known after apply) 2026-03-10 00:02:30.030911 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-03-10 00:02:30.030915 | orchestrator | + security_group_id = (known after apply) 2026-03-10 00:02:30.030920 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.030924 | orchestrator | } 2026-03-10 00:02:30.030928 | orchestrator | 2026-03-10 00:02:30.030932 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2026-03-10 00:02:30.030936 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" 
"security_group_node_rule2" { 2026-03-10 00:02:30.030940 | orchestrator | + direction = "ingress" 2026-03-10 00:02:30.030944 | orchestrator | + ethertype = "IPv4" 2026-03-10 00:02:30.030948 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.030952 | orchestrator | + protocol = "udp" 2026-03-10 00:02:30.030956 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.030960 | orchestrator | + remote_address_group_id = (known after apply) 2026-03-10 00:02:30.030964 | orchestrator | + remote_group_id = (known after apply) 2026-03-10 00:02:30.030969 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-03-10 00:02:30.030973 | orchestrator | + security_group_id = (known after apply) 2026-03-10 00:02:30.030977 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.030981 | orchestrator | } 2026-03-10 00:02:30.030985 | orchestrator | 2026-03-10 00:02:30.030989 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2026-03-10 00:02:30.030993 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2026-03-10 00:02:30.030997 | orchestrator | + direction = "ingress" 2026-03-10 00:02:30.031002 | orchestrator | + ethertype = "IPv4" 2026-03-10 00:02:30.031006 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.031010 | orchestrator | + protocol = "icmp" 2026-03-10 00:02:30.031014 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.031018 | orchestrator | + remote_address_group_id = (known after apply) 2026-03-10 00:02:30.031022 | orchestrator | + remote_group_id = (known after apply) 2026-03-10 00:02:30.031026 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-03-10 00:02:30.031030 | orchestrator | + security_group_id = (known after apply) 2026-03-10 00:02:30.031034 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.031042 | orchestrator | } 2026-03-10 00:02:30.031046 | orchestrator | 2026-03-10 
00:02:30.031050 | orchestrator | # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created 2026-03-10 00:02:30.031054 | orchestrator | + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" { 2026-03-10 00:02:30.031058 | orchestrator | + description = "vrrp" 2026-03-10 00:02:30.031062 | orchestrator | + direction = "ingress" 2026-03-10 00:02:30.031066 | orchestrator | + ethertype = "IPv4" 2026-03-10 00:02:30.031070 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.031075 | orchestrator | + protocol = "112" 2026-03-10 00:02:30.031079 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.031083 | orchestrator | + remote_address_group_id = (known after apply) 2026-03-10 00:02:30.031087 | orchestrator | + remote_group_id = (known after apply) 2026-03-10 00:02:30.031091 | orchestrator | + remote_ip_prefix = "0.0.0.0/0" 2026-03-10 00:02:30.031095 | orchestrator | + security_group_id = (known after apply) 2026-03-10 00:02:30.031099 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.031103 | orchestrator | } 2026-03-10 00:02:30.031107 | orchestrator | 2026-03-10 00:02:30.031111 | orchestrator | # openstack_networking_secgroup_v2.security_group_management will be created 2026-03-10 00:02:30.031116 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_management" { 2026-03-10 00:02:30.031120 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:30.031124 | orchestrator | + description = "management security group" 2026-03-10 00:02:30.031128 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.031132 | orchestrator | + name = "testbed-management" 2026-03-10 00:02:30.031136 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.031140 | orchestrator | + stateful = (known after apply) 2026-03-10 00:02:30.031144 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.031148 | orchestrator | } 2026-03-10 
00:02:30.031152 | orchestrator | 2026-03-10 00:02:30.031156 | orchestrator | # openstack_networking_secgroup_v2.security_group_node will be created 2026-03-10 00:02:30.031160 | orchestrator | + resource "openstack_networking_secgroup_v2" "security_group_node" { 2026-03-10 00:02:30.031165 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:30.031169 | orchestrator | + description = "node security group" 2026-03-10 00:02:30.031173 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.031177 | orchestrator | + name = "testbed-node" 2026-03-10 00:02:30.031181 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.031185 | orchestrator | + stateful = (known after apply) 2026-03-10 00:02:30.031189 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.031193 | orchestrator | } 2026-03-10 00:02:30.031197 | orchestrator | 2026-03-10 00:02:30.031201 | orchestrator | # openstack_networking_subnet_v2.subnet_management will be created 2026-03-10 00:02:30.031205 | orchestrator | + resource "openstack_networking_subnet_v2" "subnet_management" { 2026-03-10 00:02:30.031209 | orchestrator | + all_tags = (known after apply) 2026-03-10 00:02:30.031213 | orchestrator | + cidr = "192.168.16.0/20" 2026-03-10 00:02:30.031217 | orchestrator | + dns_nameservers = [ 2026-03-10 00:02:30.031222 | orchestrator | + "8.8.8.8", 2026-03-10 00:02:30.031226 | orchestrator | + "9.9.9.9", 2026-03-10 00:02:30.031230 | orchestrator | ] 2026-03-10 00:02:30.031234 | orchestrator | + enable_dhcp = true 2026-03-10 00:02:30.031238 | orchestrator | + gateway_ip = (known after apply) 2026-03-10 00:02:30.031246 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.031250 | orchestrator | + ip_version = 4 2026-03-10 00:02:30.031254 | orchestrator | + ipv6_address_mode = (known after apply) 2026-03-10 00:02:30.031258 | orchestrator | + ipv6_ra_mode = (known after apply) 2026-03-10 00:02:30.031262 | orchestrator | + name = "subnet-testbed-management" 
2026-03-10 00:02:30.031266 | orchestrator | + network_id = (known after apply) 2026-03-10 00:02:30.031270 | orchestrator | + no_gateway = false 2026-03-10 00:02:30.031274 | orchestrator | + region = (known after apply) 2026-03-10 00:02:30.031278 | orchestrator | + service_types = (known after apply) 2026-03-10 00:02:30.031285 | orchestrator | + tenant_id = (known after apply) 2026-03-10 00:02:30.031290 | orchestrator | 2026-03-10 00:02:30.031297 | orchestrator | + allocation_pool { 2026-03-10 00:02:30.031301 | orchestrator | + end = "192.168.31.250" 2026-03-10 00:02:30.031306 | orchestrator | + start = "192.168.31.200" 2026-03-10 00:02:30.031310 | orchestrator | } 2026-03-10 00:02:30.031314 | orchestrator | } 2026-03-10 00:02:30.031318 | orchestrator | 2026-03-10 00:02:30.031322 | orchestrator | # terraform_data.image will be created 2026-03-10 00:02:30.031326 | orchestrator | + resource "terraform_data" "image" { 2026-03-10 00:02:30.031330 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.031334 | orchestrator | + input = "Ubuntu 24.04" 2026-03-10 00:02:30.031338 | orchestrator | + output = (known after apply) 2026-03-10 00:02:30.031343 | orchestrator | } 2026-03-10 00:02:30.031347 | orchestrator | 2026-03-10 00:02:30.031351 | orchestrator | # terraform_data.image_node will be created 2026-03-10 00:02:30.031355 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-10 00:02:30.031359 | orchestrator | + id = (known after apply) 2026-03-10 00:02:30.031363 | orchestrator | + input = "Ubuntu 24.04" 2026-03-10 00:02:30.031367 | orchestrator | + output = (known after apply) 2026-03-10 00:02:30.031371 | orchestrator | } 2026-03-10 00:02:30.031375 | orchestrator | 2026-03-10 00:02:30.031379 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
2026-03-10 00:02:30.031383 | orchestrator | 2026-03-10 00:02:30.031387 | orchestrator | Changes to Outputs: 2026-03-10 00:02:30.031391 | orchestrator | + manager_address = (sensitive value) 2026-03-10 00:02:30.031395 | orchestrator | + private_key = (sensitive value) 2026-03-10 00:02:30.282340 | orchestrator | terraform_data.image_node: Creating... 2026-03-10 00:02:30.282436 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=e3d12a75-c5a9-5989-3c23-69792de17804] 2026-03-10 00:02:30.282452 | orchestrator | terraform_data.image: Creating... 2026-03-10 00:02:30.282465 | orchestrator | terraform_data.image: Creation complete after 0s [id=7f61bc93-7b2c-beee-8d62-9292d39df9bd] 2026-03-10 00:02:30.305763 | orchestrator | data.openstack_images_image_v2.image_node: Reading... 2026-03-10 00:02:30.319306 | orchestrator | data.openstack_images_image_v2.image: Reading... 2026-03-10 00:02:30.319474 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating... 2026-03-10 00:02:30.320543 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating... 2026-03-10 00:02:30.320722 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating... 2026-03-10 00:02:30.321178 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating... 2026-03-10 00:02:30.322205 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating... 2026-03-10 00:02:30.331569 | orchestrator | openstack_compute_keypair_v2.key: Creating... 2026-03-10 00:02:30.331643 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating... 2026-03-10 00:02:30.340622 | orchestrator | openstack_networking_network_v2.net_management: Creating... 
2026-03-10 00:02:30.787459 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-10 00:02:30.804485 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4] 2026-03-10 00:02:30.804570 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating... 2026-03-10 00:02:30.808880 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating... 2026-03-10 00:02:30.855333 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed] 2026-03-10 00:02:30.860697 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating... 2026-03-10 00:02:31.495231 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=79108f67-2047-4fe1-8be0-8a9aee54b031] 2026-03-10 00:02:31.504415 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating... 2026-03-10 00:02:34.009762 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=8f76f090-a1e0-42c3-8072-1f51d4df9a8c] 2026-03-10 00:02:34.016308 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating... 2026-03-10 00:02:34.042112 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 4s [id=b94fdc5f-2b9b-46a8-a60f-74e41f269a0d] 2026-03-10 00:02:34.046261 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating... 2026-03-10 00:02:34.070574 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=e4712c11-e6a0-4829-954c-3e21e73d266a] 2026-03-10 00:02:34.074174 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 4s [id=3e39970d-8644-42a9-a13b-932f32b0237f] 2026-03-10 00:02:34.077652 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating... 
2026-03-10 00:02:34.082371 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating... 2026-03-10 00:02:34.097751 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=5b638158-044f-4e2c-a80d-2256f7b00733] 2026-03-10 00:02:34.102661 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating... 2026-03-10 00:02:34.126685 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=32f512e5-1c04-4680-91d7-4268581c2350] 2026-03-10 00:02:34.134816 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating... 2026-03-10 00:02:34.208718 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=525599b5-6362-4aac-a0b3-94bd4cb39972] 2026-03-10 00:02:34.222859 | orchestrator | local_file.id_rsa_pub: Creating... 2026-03-10 00:02:34.229046 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=409a437e4a256462dc3768e4eb71b2217d6799e3] 2026-03-10 00:02:34.235617 | orchestrator | local_sensitive_file.id_rsa: Creating... 2026-03-10 00:02:34.247373 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=4c646288448facf3db4ef7b120abdcf710f33bed] 2026-03-10 00:02:34.256481 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating... 
2026-03-10 00:02:34.288499 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=21ab9d1e-083b-4748-865b-4e7341aec385] 2026-03-10 00:02:34.352394 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=885a647d-e739-4ea9-ae01-9c2ce04d6822] 2026-03-10 00:02:34.877465 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=b6c376d3-f3e9-4f38-b320-793235a9f6c4] 2026-03-10 00:02:36.161440 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=7369627e-7e61-4937-8aa3-6e6146015047] 2026-03-10 00:02:36.161473 | orchestrator | openstack_networking_router_v2.router: Creating... 2026-03-10 00:02:37.533036 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=3d5f2fcb-5fbf-4e93-acf4-14417225e954] 2026-03-10 00:02:37.551416 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=b98669d5-adca-4914-bd0a-18edeba10c2d] 2026-03-10 00:02:37.612899 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 4s [id=b0fbedb1-1079-4b81-9d18-c7f1d1a1550b] 2026-03-10 00:02:37.629170 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b] 2026-03-10 00:02:37.692815 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=edac3745-ba83-4026-8817-dc6a4f8e99fd] 2026-03-10 00:02:37.699224 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=71037f65-dbb1-4725-897c-91d536174aba] 2026-03-10 00:02:43.451084 | orchestrator | openstack_networking_router_v2.router: Creation complete after 8s [id=c306fdf0-5d88-47c0-b858-6b0e24b944d1] 2026-03-10 00:02:43.456484 | orchestrator | 
openstack_networking_secgroup_v2.security_group_management: Creating... 2026-03-10 00:02:43.456810 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating... 2026-03-10 00:02:43.459428 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating... 2026-03-10 00:02:43.732748 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=f9cc49e8-40dd-458b-85b4-16bbea83e25b] 2026-03-10 00:02:43.738474 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating... 2026-03-10 00:02:43.738557 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating... 2026-03-10 00:02:43.738732 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating... 2026-03-10 00:02:43.740976 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating... 2026-03-10 00:02:43.742645 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating... 2026-03-10 00:02:43.743997 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating... 2026-03-10 00:02:43.755062 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating... 2026-03-10 00:02:43.755207 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating... 2026-03-10 00:02:43.810935 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=25b1a1b2-b3b4-441d-be05-7c84275db6c7] 2026-03-10 00:02:43.823895 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating... 2026-03-10 00:02:43.973009 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=3d377636-2c20-4027-9fcf-c56ff6d2065c] 2026-03-10 00:02:43.986238 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating... 
2026-03-10 00:02:44.170605 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=2f8e47be-c5f2-4428-b255-6aad7e74f298] 2026-03-10 00:02:44.181544 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating... 2026-03-10 00:02:44.337728 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=47b733c3-051c-457f-8cac-3e7589011d70] 2026-03-10 00:02:44.343537 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating... 2026-03-10 00:02:44.557166 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=c59eb170-01bd-4539-9ec6-6fd8894fbc9b] 2026-03-10 00:02:44.562272 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating... 2026-03-10 00:02:44.924407 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=967d8da8-429e-4970-8eea-5230448443b0] 2026-03-10 00:02:44.934151 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating... 2026-03-10 00:02:45.007279 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=b67b828d-98b6-4663-98d7-1aa94d4fd208] 2026-03-10 00:02:45.017245 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating... 2026-03-10 00:02:45.022201 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=a52060ed-8fcc-4010-9cf3-7499ae33cad6] 2026-03-10 00:02:45.027835 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating... 
2026-03-10 00:02:45.055712 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=2a5836c0-01c0-4e38-a3a1-20109c44529b] 2026-03-10 00:02:45.115032 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=f215b170-7ec1-4146-b79e-0c54e2e07a73] 2026-03-10 00:02:45.176001 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=39265d01-758a-4451-8a7f-507aa5b64832] 2026-03-10 00:02:45.411389 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=72e28d69-63b9-4539-a8ea-9f863ec2c6b3] 2026-03-10 00:02:45.816345 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 2s [id=5579abe0-66d8-42fd-8bf2-75aba0531de8] 2026-03-10 00:02:46.019938 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=161f2bc3-2675-45f3-8a1d-39bac52114a6] 2026-03-10 00:02:46.196985 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=3cbc42fb-9649-4195-b5d0-4e4602671f14] 2026-03-10 00:02:46.379714 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 1s [id=b7275156-8adb-4454-aa35-26f3761e361f] 2026-03-10 00:02:47.287009 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 2s [id=37cb6770-ca8c-4588-b883-a1ff49225ee5] 2026-03-10 00:02:47.620029 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 5s [id=45aed5e2-a62a-4a7d-882f-dcd9e901c0df] 2026-03-10 00:02:47.640277 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating... 2026-03-10 00:02:47.648972 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating... 
2026-03-10 00:02:47.652959 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating... 2026-03-10 00:02:47.653070 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating... 2026-03-10 00:02:47.664630 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating... 2026-03-10 00:02:47.669316 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating... 2026-03-10 00:02:47.676274 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating... 2026-03-10 00:02:49.784155 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=f0a36874-df13-41d6-80cc-5fd038c42f02] 2026-03-10 00:02:49.795195 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating... 2026-03-10 00:02:49.795972 | orchestrator | local_file.inventory: Creating... 2026-03-10 00:02:49.800963 | orchestrator | local_file.MANAGER_ADDRESS: Creating... 2026-03-10 00:02:49.857617 | orchestrator | local_file.inventory: Creation complete after 0s [id=56bc032664212f7d3af0eb607338bae1ccab65fd] 2026-03-10 00:02:49.859858 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=98979c7e4556a4db47b901b800a0002793acb55b] 2026-03-10 00:02:50.798651 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=f0a36874-df13-41d6-80cc-5fd038c42f02] 2026-03-10 00:02:57.653614 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed] 2026-03-10 00:02:57.654709 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed] 2026-03-10 00:02:57.654736 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed] 2026-03-10 00:02:57.665092 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... 
[10s elapsed] 2026-03-10 00:02:57.674375 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed] 2026-03-10 00:02:57.676565 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed] 2026-03-10 00:03:07.662341 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed] 2026-03-10 00:03:07.662450 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed] 2026-03-10 00:03:07.662477 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed] 2026-03-10 00:03:07.665729 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed] 2026-03-10 00:03:07.675224 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed] 2026-03-10 00:03:07.677523 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed] 2026-03-10 00:03:17.671850 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed] 2026-03-10 00:03:17.672010 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed] 2026-03-10 00:03:17.672027 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed] 2026-03-10 00:03:17.672049 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed] 2026-03-10 00:03:17.676380 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed] 2026-03-10 00:03:17.677564 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed] 2026-03-10 00:03:18.405801 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 30s [id=d2804c4c-f2e5-4a2f-9796-3eb2d11e3a63] 2026-03-10 00:03:27.680883 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... 
[40s elapsed] 2026-03-10 00:03:27.681029 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [40s elapsed] 2026-03-10 00:03:27.681045 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [40s elapsed] 2026-03-10 00:03:27.681057 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [40s elapsed] 2026-03-10 00:03:27.681115 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [40s elapsed] 2026-03-10 00:03:28.385410 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 40s [id=b112eba3-9e9d-43fb-8f66-90fb2851b0f5] 2026-03-10 00:03:37.690201 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [50s elapsed] 2026-03-10 00:03:37.690328 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [50s elapsed] 2026-03-10 00:03:37.690346 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [50s elapsed] 2026-03-10 00:03:37.690359 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [50s elapsed] 2026-03-10 00:03:38.449694 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 50s [id=f7348ba5-908a-4db4-a856-6d4500273c14] 2026-03-10 00:03:38.543163 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 51s [id=5fbc965e-523f-401e-b363-c5a62dea9227] 2026-03-10 00:03:38.740796 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 51s [id=93211e5e-3e62-41bd-a64f-6cea18818b0c] 2026-03-10 00:03:39.118150 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 51s [id=d45542dd-9abb-4e97-b885-9ca8367915c3] 2026-03-10 00:03:39.141406 | orchestrator | null_resource.node_semaphore: Creating... 
2026-03-10 00:03:39.148378 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=4283990126689461776] 2026-03-10 00:03:39.151583 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2026-03-10 00:03:39.165368 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating... 2026-03-10 00:03:39.168630 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating... 2026-03-10 00:03:39.172069 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating... 2026-03-10 00:03:39.181078 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating... 2026-03-10 00:03:39.184613 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating... 2026-03-10 00:03:39.195809 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating... 2026-03-10 00:03:39.219565 | orchestrator | openstack_compute_instance_v2.manager_server: Creating... 2026-03-10 00:03:39.226508 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating... 2026-03-10 00:03:39.231049 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 
2026-03-10 00:03:42.593891 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 4s [id=d45542dd-9abb-4e97-b885-9ca8367915c3/3e39970d-8644-42a9-a13b-932f32b0237f] 2026-03-10 00:03:42.596346 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 4s [id=f7348ba5-908a-4db4-a856-6d4500273c14/21ab9d1e-083b-4748-865b-4e7341aec385] 2026-03-10 00:03:42.624072 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 4s [id=f7348ba5-908a-4db4-a856-6d4500273c14/32f512e5-1c04-4680-91d7-4268581c2350] 2026-03-10 00:03:42.628482 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 4s [id=93211e5e-3e62-41bd-a64f-6cea18818b0c/5b638158-044f-4e2c-a80d-2256f7b00733] 2026-03-10 00:03:42.659738 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 4s [id=d45542dd-9abb-4e97-b885-9ca8367915c3/885a647d-e739-4ea9-ae01-9c2ce04d6822] 2026-03-10 00:03:42.717495 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 4s [id=93211e5e-3e62-41bd-a64f-6cea18818b0c/e4712c11-e6a0-4829-954c-3e21e73d266a] 2026-03-10 00:03:48.816008 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 10s [id=93211e5e-3e62-41bd-a64f-6cea18818b0c/8f76f090-a1e0-42c3-8072-1f51d4df9a8c] 2026-03-10 00:03:48.826952 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 10s [id=d45542dd-9abb-4e97-b885-9ca8367915c3/525599b5-6362-4aac-a0b3-94bd4cb39972] 2026-03-10 00:03:48.906127 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 10s [id=f7348ba5-908a-4db4-a856-6d4500273c14/b94fdc5f-2b9b-46a8-a60f-74e41f269a0d] 2026-03-10 00:03:49.222254 | orchestrator | 
openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2026-03-10 00:03:59.222571 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2026-03-10 00:03:59.998984 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=0a75d875-ffd2-43fe-ac6f-e6c83fb9a63e] 2026-03-10 00:04:00.024481 | orchestrator | 2026-03-10 00:04:00.024559 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 2026-03-10 00:04:00.024608 | orchestrator | 2026-03-10 00:04:00.024621 | orchestrator | Outputs: 2026-03-10 00:04:00.024631 | orchestrator | 2026-03-10 00:04:00.024664 | orchestrator | manager_address = 2026-03-10 00:04:00.024676 | orchestrator | private_key = 2026-03-10 00:04:00.264583 | orchestrator | ok: Runtime: 0:01:34.409449 2026-03-10 00:04:00.301006 | 2026-03-10 00:04:00.301174 | TASK [Create infrastructure (stable)] 2026-03-10 00:04:00.836331 | orchestrator | skipping: Conditional result was False 2026-03-10 00:04:00.854393 | 2026-03-10 00:04:00.854575 | TASK [Fetch manager address] 2026-03-10 00:04:01.332276 | orchestrator | ok 2026-03-10 00:04:01.343611 | 2026-03-10 00:04:01.343903 | TASK [Set manager_host address] 2026-03-10 00:04:01.423052 | orchestrator | ok 2026-03-10 00:04:01.432702 | 2026-03-10 00:04:01.432906 | LOOP [Update ansible collections] 2026-03-10 00:04:04.412509 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-10 00:04:04.413357 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-10 00:04:04.413440 | orchestrator | Starting galaxy collection install process 2026-03-10 00:04:04.413566 | orchestrator | Process install dependency map 2026-03-10 00:04:04.413615 | orchestrator | Starting collection install process 2026-03-10 00:04:04.413651 | orchestrator | Installing 'osism.commons:999.0.0' to 
'/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons' 2026-03-10 00:04:04.413694 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons 2026-03-10 00:04:04.413763 | orchestrator | osism.commons:999.0.0 was installed successfully 2026-03-10 00:04:04.413856 | orchestrator | ok: Item: commons Runtime: 0:00:02.642681 2026-03-10 00:04:05.375336 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-10 00:04:05.375548 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2026-03-10 00:04:05.375625 | orchestrator | Starting galaxy collection install process 2026-03-10 00:04:05.375683 | orchestrator | Process install dependency map 2026-03-10 00:04:05.375758 | orchestrator | Starting collection install process 2026-03-10 00:04:05.375812 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services' 2026-03-10 00:04:05.375865 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/services 2026-03-10 00:04:05.375913 | orchestrator | osism.services:999.0.0 was installed successfully 2026-03-10 00:04:05.375988 | orchestrator | ok: Item: services Runtime: 0:00:00.660609 2026-03-10 00:04:05.394385 | 2026-03-10 00:04:05.394552 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-10 00:04:16.023840 | orchestrator | ok 2026-03-10 00:04:16.032172 | 2026-03-10 00:04:16.032308 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-10 00:05:16.071881 | orchestrator | ok 2026-03-10 00:05:16.080093 | 2026-03-10 00:05:16.080232 | TASK [Fetch manager ssh hostkey] 2026-03-10 00:05:17.661324 | orchestrator | Output suppressed because no_log was given 2026-03-10 00:05:17.676163 | 2026-03-10 
00:05:17.676343 | TASK [Get ssh keypair from terraform environment] 2026-03-10 00:05:18.226075 | orchestrator | ok: Runtime: 0:00:00.005849 2026-03-10 00:05:18.236738 | 2026-03-10 00:05:18.236929 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-10 00:05:18.285659 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2026-03-10 00:05:18.295422 | 2026-03-10 00:05:18.295561 | TASK [Run manager part 0] 2026-03-10 00:05:19.244019 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-10 00:05:19.290944 | orchestrator | 2026-03-10 00:05:19.291050 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2026-03-10 00:05:19.291070 | orchestrator | 2026-03-10 00:05:19.291098 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2026-03-10 00:05:21.282325 | orchestrator | ok: [testbed-manager] 2026-03-10 00:05:21.282422 | orchestrator | 2026-03-10 00:05:21.282455 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-10 00:05:21.282474 | orchestrator | 2026-03-10 00:05:21.282491 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-10 00:05:23.286702 | orchestrator | ok: [testbed-manager] 2026-03-10 00:05:23.286806 | orchestrator | 2026-03-10 00:05:23.286819 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-10 00:05:23.973824 | orchestrator | ok: [testbed-manager] 2026-03-10 00:05:23.973887 | orchestrator | 2026-03-10 00:05:23.973896 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2026-03-10 00:05:24.025915 | orchestrator | skipping: [testbed-manager] 2026-03-10 
00:05:24.025967 | orchestrator | 2026-03-10 00:05:24.025978 | orchestrator | TASK [Update package cache] **************************************************** 2026-03-10 00:05:24.065237 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:05:24.065299 | orchestrator | 2026-03-10 00:05:24.065311 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-10 00:05:24.102566 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:05:24.102653 | orchestrator | 2026-03-10 00:05:24.102669 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-10 00:05:24.130120 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:05:24.130193 | orchestrator | 2026-03-10 00:05:24.130206 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-10 00:05:24.155977 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:05:24.156048 | orchestrator | 2026-03-10 00:05:24.156057 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ****************************** 2026-03-10 00:05:24.186865 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:05:24.186916 | orchestrator | 2026-03-10 00:05:24.186924 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2026-03-10 00:05:24.229600 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:05:24.229667 | orchestrator | 2026-03-10 00:05:24.229679 | orchestrator | TASK [Set APT options on manager] ********************************************** 2026-03-10 00:05:25.004814 | orchestrator | changed: [testbed-manager] 2026-03-10 00:05:25.004879 | orchestrator | 2026-03-10 00:05:25.004891 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2026-03-10 00:08:19.384858 | orchestrator | changed: [testbed-manager] 2026-03-10 00:08:19.384968 | orchestrator | 2026-03-10 00:08:19.384990 | 
orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2026-03-10 00:09:57.223742 | orchestrator | changed: [testbed-manager] 2026-03-10 00:09:57.223826 | orchestrator | 2026-03-10 00:09:57.223841 | orchestrator | TASK [Install required packages] *********************************************** 2026-03-10 00:10:19.218808 | orchestrator | changed: [testbed-manager] 2026-03-10 00:10:19.218849 | orchestrator | 2026-03-10 00:10:19.218858 | orchestrator | TASK [Remove some python packages] ********************************************* 2026-03-10 00:10:28.634626 | orchestrator | changed: [testbed-manager] 2026-03-10 00:10:28.634668 | orchestrator | 2026-03-10 00:10:28.634676 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-10 00:10:28.684781 | orchestrator | ok: [testbed-manager] 2026-03-10 00:10:28.684818 | orchestrator | 2026-03-10 00:10:28.684826 | orchestrator | TASK [Get current user] ******************************************************** 2026-03-10 00:10:29.517746 | orchestrator | ok: [testbed-manager] 2026-03-10 00:10:29.517786 | orchestrator | 2026-03-10 00:10:29.517795 | orchestrator | TASK [Create venv directory] *************************************************** 2026-03-10 00:10:30.286071 | orchestrator | changed: [testbed-manager] 2026-03-10 00:10:30.286149 | orchestrator | 2026-03-10 00:10:30.286156 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2026-03-10 00:10:36.756010 | orchestrator | changed: [testbed-manager] 2026-03-10 00:10:36.756053 | orchestrator | 2026-03-10 00:10:36.756075 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2026-03-10 00:10:42.863780 | orchestrator | changed: [testbed-manager] 2026-03-10 00:10:42.863874 | orchestrator | 2026-03-10 00:10:42.863893 | orchestrator | TASK [Install requests >= 2.32.2] 
********************************************** 2026-03-10 00:10:45.687215 | orchestrator | changed: [testbed-manager] 2026-03-10 00:10:45.687347 | orchestrator | 2026-03-10 00:10:45.687376 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-10 00:10:47.496688 | orchestrator | changed: [testbed-manager] 2026-03-10 00:10:47.496739 | orchestrator | 2026-03-10 00:10:47.496750 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-10 00:10:48.636269 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-10 00:10:48.636343 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-10 00:10:48.636358 | orchestrator | 2026-03-10 00:10:48.636372 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-10 00:10:48.684165 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-10 00:10:48.684269 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-10 00:10:48.684298 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-10 00:10:48.684317 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-10 00:10:52.153109 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-10 00:10:52.153169 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-10 00:10:52.153185 | orchestrator | 2026-03-10 00:10:52.153198 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-10 00:10:52.750469 | orchestrator | changed: [testbed-manager] 2026-03-10 00:10:52.750559 | orchestrator | 2026-03-10 00:10:52.750576 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-10 00:14:13.644722 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-10 00:14:13.644940 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-10 00:14:13.645016 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-10 00:14:13.645033 | orchestrator | 2026-03-10 00:14:13.645046 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-10 00:14:16.060478 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-10 00:14:16.060579 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-10 00:14:16.060594 | orchestrator | 2026-03-10 00:14:16.060608 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-10 00:14:16.060620 | orchestrator | 2026-03-10 00:14:16.060631 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-10 00:14:17.477152 | orchestrator | ok: [testbed-manager] 2026-03-10 00:14:17.477239 | orchestrator | 2026-03-10 00:14:17.477267 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-10 00:14:17.528398 | orchestrator | ok: [testbed-manager] 2026-03-10 00:14:17.528440 | 
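The two collection-install tasks above could look roughly like the following fragment; the item lists and the /opt/src layout come from the log, but the module choice and exact `ansible-galaxy` flags are assumptions:

```yaml
- name: Install collections from Ansible galaxy
  ansible.builtin.command: "ansible-galaxy collection install '{{ item }}'"
  loop:
    - ansible.netcommon
    - ansible.posix
    - community.docker>=3.10.2

- name: Install local collections
  ansible.builtin.command: "ansible-galaxy collection install /opt/src/osism/{{ item }}"
  loop:
    - ansible-collection-commons
    - ansible-collection-services
```

Installing the local collections from the synced sources is what lets the job test the 999.0.0 development versions instead of released ones.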
orchestrator | 2026-03-10 00:14:17.528450 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-10 00:14:17.592226 | orchestrator | ok: [testbed-manager] 2026-03-10 00:14:17.592342 | orchestrator | 2026-03-10 00:14:17.592360 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-10 00:14:18.424037 | orchestrator | changed: [testbed-manager] 2026-03-10 00:14:18.424081 | orchestrator | 2026-03-10 00:14:18.424091 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-10 00:14:19.160855 | orchestrator | changed: [testbed-manager] 2026-03-10 00:14:19.161019 | orchestrator | 2026-03-10 00:14:19.161030 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-10 00:14:20.589029 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-10 00:14:20.589129 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-10 00:14:20.589145 | orchestrator | 2026-03-10 00:14:20.589178 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-10 00:14:22.044031 | orchestrator | changed: [testbed-manager] 2026-03-10 00:14:22.044137 | orchestrator | 2026-03-10 00:14:22.044154 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-10 00:14:23.865068 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-10 00:14:23.865176 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-10 00:14:23.865192 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-10 00:14:23.865204 | orchestrator | 2026-03-10 00:14:23.865218 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-10 00:14:23.925162 | orchestrator | skipping: 
[testbed-manager] 2026-03-10 00:14:23.925209 | orchestrator | 2026-03-10 00:14:23.925219 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-10 00:14:23.996815 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:14:23.996858 | orchestrator | 2026-03-10 00:14:23.996870 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-10 00:14:24.618519 | orchestrator | changed: [testbed-manager] 2026-03-10 00:14:24.618571 | orchestrator | 2026-03-10 00:14:24.618581 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-10 00:14:24.683088 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:14:24.683136 | orchestrator | 2026-03-10 00:14:24.683145 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-10 00:14:25.579944 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-10 00:14:25.580035 | orchestrator | changed: [testbed-manager] 2026-03-10 00:14:25.580053 | orchestrator | 2026-03-10 00:14:25.580065 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-10 00:14:25.627873 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:14:25.627908 | orchestrator | 2026-03-10 00:14:25.627914 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-10 00:14:25.668729 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:14:25.668764 | orchestrator | 2026-03-10 00:14:25.668770 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-10 00:14:25.702123 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:14:25.702157 | orchestrator | 2026-03-10 00:14:25.702164 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-10 00:14:25.770131 | 
orchestrator | skipping: [testbed-manager] 2026-03-10 00:14:25.770164 | orchestrator | 2026-03-10 00:14:25.770170 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-10 00:14:26.479754 | orchestrator | ok: [testbed-manager] 2026-03-10 00:14:26.479806 | orchestrator | 2026-03-10 00:14:26.479816 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-10 00:14:26.479822 | orchestrator | 2026-03-10 00:14:26.479829 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-10 00:14:27.894274 | orchestrator | ok: [testbed-manager] 2026-03-10 00:14:27.894314 | orchestrator | 2026-03-10 00:14:27.894321 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-10 00:14:28.876847 | orchestrator | changed: [testbed-manager] 2026-03-10 00:14:28.876887 | orchestrator | 2026-03-10 00:14:28.876893 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:14:28.876900 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-10 00:14:28.876904 | orchestrator | 2026-03-10 00:14:29.160957 | orchestrator | ok: Runtime: 0:09:10.384984 2026-03-10 00:14:29.170286 | 2026-03-10 00:14:29.170393 | TASK [Point out that the log in on the manager is now possible] 2026-03-10 00:14:29.201420 | orchestrator | ok: It is already possible to log in to the manager with 'make login'. 2026-03-10 00:14:29.209219 | 2026-03-10 00:14:29.209331 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-10 00:14:29.249312 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-10 00:14:29.258742 | 2026-03-10 00:14:29.258896 | TASK [Run manager part 1 + 2] 2026-03-10 00:14:30.104772 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-10 00:14:30.160486 | orchestrator | 2026-03-10 00:14:30.160569 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-10 00:14:30.160593 | orchestrator | 2026-03-10 00:14:30.160624 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-10 00:14:33.094613 | orchestrator | ok: [testbed-manager] 2026-03-10 00:14:33.094711 | orchestrator | 2026-03-10 00:14:33.094772 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-10 00:14:33.128940 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:14:33.129052 | orchestrator | 2026-03-10 00:14:33.129071 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-10 00:14:33.167370 | orchestrator | ok: [testbed-manager] 2026-03-10 00:14:33.167454 | orchestrator | 2026-03-10 00:14:33.167478 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-10 00:14:33.216606 | orchestrator | ok: [testbed-manager] 2026-03-10 00:14:33.216661 | orchestrator | 2026-03-10 00:14:33.216669 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-10 00:14:33.286737 | orchestrator | ok: [testbed-manager] 2026-03-10 00:14:33.286791 | orchestrator | 2026-03-10 00:14:33.286799 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-10 00:14:33.381836 | orchestrator | ok: [testbed-manager] 2026-03-10 00:14:33.381899 | orchestrator | 2026-03-10 00:14:33.381912 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-10 00:14:33.420871 | 
orchestrator | included: /home/zuul-testbed06/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-10 00:14:33.420932 | orchestrator | 2026-03-10 00:14:33.420939 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-10 00:14:34.119696 | orchestrator | ok: [testbed-manager] 2026-03-10 00:14:34.119755 | orchestrator | 2026-03-10 00:14:34.119767 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-10 00:14:34.173188 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:14:34.173290 | orchestrator | 2026-03-10 00:14:34.173299 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-10 00:14:35.555172 | orchestrator | changed: [testbed-manager] 2026-03-10 00:14:35.555348 | orchestrator | 2026-03-10 00:14:35.555359 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-10 00:14:36.146141 | orchestrator | ok: [testbed-manager] 2026-03-10 00:14:36.146230 | orchestrator | 2026-03-10 00:14:36.146246 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-10 00:14:37.365834 | orchestrator | changed: [testbed-manager] 2026-03-10 00:14:37.366009 | orchestrator | 2026-03-10 00:14:37.366093 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-10 00:14:53.439284 | orchestrator | changed: [testbed-manager] 2026-03-10 00:14:53.439328 | orchestrator | 2026-03-10 00:14:53.439354 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-10 00:14:54.142163 | orchestrator | ok: [testbed-manager] 2026-03-10 00:14:54.142277 | orchestrator | 2026-03-10 00:14:54.142308 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-10 00:14:54.197940 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:14:54.198075 | orchestrator | 2026-03-10 00:14:54.198091 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-10 00:14:55.185258 | orchestrator | changed: [testbed-manager] 2026-03-10 00:14:55.185303 | orchestrator | 2026-03-10 00:14:55.185312 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-10 00:14:56.201191 | orchestrator | changed: [testbed-manager] 2026-03-10 00:14:56.201276 | orchestrator | 2026-03-10 00:14:56.201291 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-10 00:14:56.787731 | orchestrator | changed: [testbed-manager] 2026-03-10 00:14:56.787815 | orchestrator | 2026-03-10 00:14:56.787828 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-10 00:14:56.826222 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-10 00:14:56.826297 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-10 00:14:56.826304 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-10 00:14:56.826308 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2026-03-10 00:14:58.813195 | orchestrator | changed: [testbed-manager] 2026-03-10 00:14:58.813270 | orchestrator | 2026-03-10 00:14:58.813280 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-10 00:15:09.538233 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-10 00:15:09.538307 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-10 00:15:09.538319 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-10 00:15:09.538328 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-10 00:15:09.538344 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-10 00:15:09.538352 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-10 00:15:09.538360 | orchestrator | 2026-03-10 00:15:09.538370 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-10 00:15:10.614671 | orchestrator | changed: [testbed-manager] 2026-03-10 00:15:10.614752 | orchestrator | 2026-03-10 00:15:10.614765 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-10 00:15:10.659202 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:15:10.659284 | orchestrator | 2026-03-10 00:15:10.659299 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-10 00:15:14.747322 | orchestrator | changed: [testbed-manager] 2026-03-10 00:15:14.747362 | orchestrator | 2026-03-10 00:15:14.747370 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-10 00:15:14.785094 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:15:14.785130 | orchestrator | 2026-03-10 00:15:14.785136 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-10 00:16:59.915235 | orchestrator | changed: [testbed-manager] 2026-03-10 
00:16:59.915369 | orchestrator | 2026-03-10 00:16:59.915390 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-10 00:17:01.196751 | orchestrator | ok: [testbed-manager] 2026-03-10 00:17:01.196787 | orchestrator | 2026-03-10 00:17:01.196793 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:17:01.196800 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-10 00:17:01.196804 | orchestrator | 2026-03-10 00:17:01.395807 | orchestrator | ok: Runtime: 0:02:31.753168 2026-03-10 00:17:01.413788 | 2026-03-10 00:17:01.413938 | TASK [Reboot manager] 2026-03-10 00:17:02.949830 | orchestrator | ok: Runtime: 0:00:01.017349 2026-03-10 00:17:02.966420 | 2026-03-10 00:17:02.966585 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-10 00:17:19.578136 | orchestrator | ok 2026-03-10 00:17:19.589655 | 2026-03-10 00:17:19.589799 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-10 00:18:19.635988 | orchestrator | ok 2026-03-10 00:18:19.646759 | 2026-03-10 00:18:19.646931 | TASK [Deploy manager + bootstrap nodes] 2026-03-10 00:18:22.347218 | orchestrator | 2026-03-10 00:18:22.347403 | orchestrator | # DEPLOY MANAGER 2026-03-10 00:18:22.347430 | orchestrator | 2026-03-10 00:18:22.347484 | orchestrator | + set -e 2026-03-10 00:18:22.347500 | orchestrator | + echo 2026-03-10 00:18:22.347514 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-10 00:18:22.347532 | orchestrator | + echo 2026-03-10 00:18:22.347580 | orchestrator | + cat /opt/manager-vars.sh 2026-03-10 00:18:22.352481 | orchestrator | export NUMBER_OF_NODES=6 2026-03-10 00:18:22.352518 | orchestrator | 2026-03-10 00:18:22.352531 | orchestrator | export CEPH_VERSION=reef 2026-03-10 00:18:22.352544 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-10 00:18:22.352556 | orchestrator 
| export MANAGER_VERSION=latest 2026-03-10 00:18:22.352580 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-10 00:18:22.352591 | orchestrator | 2026-03-10 00:18:22.352610 | orchestrator | export ARA=false 2026-03-10 00:18:22.352622 | orchestrator | export DEPLOY_MODE=manager 2026-03-10 00:18:22.352639 | orchestrator | export TEMPEST=true 2026-03-10 00:18:22.352651 | orchestrator | export IS_ZUUL=true 2026-03-10 00:18:22.352662 | orchestrator | 2026-03-10 00:18:22.352680 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.226 2026-03-10 00:18:22.352691 | orchestrator | export EXTERNAL_API=false 2026-03-10 00:18:22.352702 | orchestrator | 2026-03-10 00:18:22.352713 | orchestrator | export IMAGE_USER=ubuntu 2026-03-10 00:18:22.352728 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-10 00:18:22.352739 | orchestrator | 2026-03-10 00:18:22.352750 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-10 00:18:22.352826 | orchestrator | 2026-03-10 00:18:22.352857 | orchestrator | + echo 2026-03-10 00:18:22.352871 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-10 00:18:22.354208 | orchestrator | ++ export INTERACTIVE=false 2026-03-10 00:18:22.354237 | orchestrator | ++ INTERACTIVE=false 2026-03-10 00:18:22.354249 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-10 00:18:22.354261 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-10 00:18:22.354272 | orchestrator | + source /opt/manager-vars.sh 2026-03-10 00:18:22.354284 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-10 00:18:22.354295 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-10 00:18:22.354305 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-10 00:18:22.354316 | orchestrator | ++ CEPH_VERSION=reef 2026-03-10 00:18:22.354327 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-10 00:18:22.354338 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-10 00:18:22.354348 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-10 00:18:22.354359 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-03-10 00:18:22.354406 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-10 00:18:22.354430 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-10 00:18:22.354441 | orchestrator | ++ export ARA=false 2026-03-10 00:18:22.354452 | orchestrator | ++ ARA=false 2026-03-10 00:18:22.354463 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-10 00:18:22.354474 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-10 00:18:22.354609 | orchestrator | ++ export TEMPEST=true 2026-03-10 00:18:22.354624 | orchestrator | ++ TEMPEST=true 2026-03-10 00:18:22.354635 | orchestrator | ++ export IS_ZUUL=true 2026-03-10 00:18:22.354646 | orchestrator | ++ IS_ZUUL=true 2026-03-10 00:18:22.354657 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.226 2026-03-10 00:18:22.354668 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.226 2026-03-10 00:18:22.354678 | orchestrator | ++ export EXTERNAL_API=false 2026-03-10 00:18:22.354689 | orchestrator | ++ EXTERNAL_API=false 2026-03-10 00:18:22.354700 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-10 00:18:22.354711 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-10 00:18:22.354721 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-10 00:18:22.354732 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-10 00:18:22.354743 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-10 00:18:22.354754 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-10 00:18:22.354765 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-10 00:18:22.430073 | orchestrator | + docker version 2026-03-10 00:18:22.544400 | orchestrator | Client: Docker Engine - Community 2026-03-10 00:18:22.544477 | orchestrator | Version: 27.5.1 2026-03-10 00:18:22.544486 | orchestrator | API version: 1.47 2026-03-10 00:18:22.544494 | orchestrator | Go version: go1.22.11 2026-03-10 00:18:22.544501 | orchestrator | Git commit: 9f9e405 2026-03-10 00:18:22.544507 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-10 00:18:22.544515 | orchestrator | OS/Arch: linux/amd64 2026-03-10 00:18:22.544521 | orchestrator | Context: default 2026-03-10 00:18:22.544526 | orchestrator | 2026-03-10 00:18:22.544533 | orchestrator | Server: Docker Engine - Community 2026-03-10 00:18:22.544539 | orchestrator | Engine: 2026-03-10 00:18:22.544545 | orchestrator | Version: 27.5.1 2026-03-10 00:18:22.544551 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-10 00:18:22.544580 | orchestrator | Go version: go1.22.11 2026-03-10 00:18:22.544586 | orchestrator | Git commit: 4c9b3b0 2026-03-10 00:18:22.544592 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-10 00:18:22.544598 | orchestrator | OS/Arch: linux/amd64 2026-03-10 00:18:22.544605 | orchestrator | Experimental: false 2026-03-10 00:18:22.544614 | orchestrator | containerd: 2026-03-10 00:18:22.544627 | orchestrator | Version: v2.2.1 2026-03-10 00:18:22.544641 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-03-10 00:18:22.544650 | orchestrator | runc: 2026-03-10 00:18:22.544659 | orchestrator | Version: 1.3.4 2026-03-10 00:18:22.544669 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-10 00:18:22.544679 | orchestrator | docker-init: 2026-03-10 00:18:22.544688 | orchestrator | Version: 0.19.0 2026-03-10 00:18:22.544698 | orchestrator | GitCommit: de40ad0 2026-03-10 00:18:22.548024 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-10 00:18:22.558077 | orchestrator | + set -e 2026-03-10 00:18:22.558421 | orchestrator | + source /opt/manager-vars.sh 2026-03-10 00:18:22.558436 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-10 00:18:22.558444 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-10 00:18:22.558449 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-10 00:18:22.558454 | orchestrator | ++ CEPH_VERSION=reef 2026-03-10 00:18:22.558459 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-10 
00:18:22.558466 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-10 00:18:22.558470 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-10 00:18:22.558476 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-10 00:18:22.558481 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-10 00:18:22.558486 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-10 00:18:22.558491 | orchestrator | ++ export ARA=false 2026-03-10 00:18:22.558496 | orchestrator | ++ ARA=false 2026-03-10 00:18:22.558501 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-10 00:18:22.558506 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-10 00:18:22.558511 | orchestrator | ++ export TEMPEST=true 2026-03-10 00:18:22.558516 | orchestrator | ++ TEMPEST=true 2026-03-10 00:18:22.558521 | orchestrator | ++ export IS_ZUUL=true 2026-03-10 00:18:22.558525 | orchestrator | ++ IS_ZUUL=true 2026-03-10 00:18:22.558530 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.226 2026-03-10 00:18:22.558535 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.226 2026-03-10 00:18:22.558540 | orchestrator | ++ export EXTERNAL_API=false 2026-03-10 00:18:22.558544 | orchestrator | ++ EXTERNAL_API=false 2026-03-10 00:18:22.558549 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-10 00:18:22.558554 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-10 00:18:22.558558 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-10 00:18:22.558563 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-10 00:18:22.558568 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-10 00:18:22.558573 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-10 00:18:22.558578 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-10 00:18:22.558582 | orchestrator | ++ export INTERACTIVE=false 2026-03-10 00:18:22.558587 | orchestrator | ++ INTERACTIVE=false 2026-03-10 00:18:22.558592 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-10 00:18:22.558600 | orchestrator | ++ 
OSISM_APPLY_RETRY=1 2026-03-10 00:18:22.558605 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-10 00:18:22.558610 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-10 00:18:22.558614 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef 2026-03-10 00:18:22.565698 | orchestrator | + set -e 2026-03-10 00:18:22.565770 | orchestrator | + VERSION=reef 2026-03-10 00:18:22.566803 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-10 00:18:22.573681 | orchestrator | + [[ -n ceph_version: reef ]] 2026-03-10 00:18:22.573731 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml 2026-03-10 00:18:22.579859 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2 2026-03-10 00:18:22.587215 | orchestrator | + set -e 2026-03-10 00:18:22.587273 | orchestrator | + VERSION=2024.2 2026-03-10 00:18:22.587773 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml 2026-03-10 00:18:22.592308 | orchestrator | + [[ -n openstack_version: 2024.2 ]] 2026-03-10 00:18:22.592364 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml 2026-03-10 00:18:22.598105 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2026-03-10 00:18:22.599054 | orchestrator | ++ semver latest 7.0.0 2026-03-10 00:18:22.655308 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-10 00:18:22.655409 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-10 00:18:22.655428 | orchestrator | + echo 'enable_osism_kubernetes: true' 2026-03-10 00:18:22.655558 | orchestrator | ++ semver latest 10.0.0-0 2026-03-10 00:18:22.698371 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-10 00:18:22.698535 | orchestrator | ++ semver 2024.2 2025.1 2026-03-10 00:18:22.749370 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-10 00:18:22.749458 | orchestrator | + 
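The version plumbing traced above (set-ceph-version.sh and set-openstack-version.sh rewriting configuration.yml with `sed`, and `semver` comparisons in which a non-release tag such as `latest` yields -1 against any version) can be sketched as follows. Both functions are simplified stand-ins for the real scripts; the reliance on GNU `sort -V` is an assumption, and full semver pre-release precedence (e.g. `10.0.0-0`) is not modeled:

```shell
# Simplified stand-in for scripts/set-*-version.sh: rewrite "key: value"
# in the configuration file only when the key is already present.
set_config_version() {
  key="$1"; value="$2"; file="$3"
  if grep -q "^${key}:" "$file"; then
    sed -i "s/^${key}: .*/${key}: ${value}/" "$file"
  fi
}

# Simplified stand-in for contrib/semver2.sh: print -1/0/1 when $1 is
# lower than/equal to/higher than $2. A non-numeric tag like "latest"
# is treated as lower than any release, matching the -1 results above.
semver_cmp() {
  case "$1" in [0-9]*) ;; *) echo -1; return ;; esac
  if [ "$1" = "$2" ]; then
    echo 0
  elif [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]; then
    echo -1
  else
    echo 1
  fi
}
```

This is why `MANAGER_VERSION=latest` falls through to the `latest == latest` branch: the numeric comparisons all report -1, so only the tag-equality checks can enable version-gated features such as `enable_osism_kubernetes`.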
/opt/configuration/scripts/enable-resource-nodes.sh 2026-03-10 00:18:22.842496 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-10 00:18:22.844042 | orchestrator | + source /opt/venv/bin/activate 2026-03-10 00:18:22.845202 | orchestrator | ++ deactivate nondestructive 2026-03-10 00:18:22.845232 | orchestrator | ++ '[' -n '' ']' 2026-03-10 00:18:22.845244 | orchestrator | ++ '[' -n '' ']' 2026-03-10 00:18:22.845255 | orchestrator | ++ hash -r 2026-03-10 00:18:22.845266 | orchestrator | ++ '[' -n '' ']' 2026-03-10 00:18:22.845277 | orchestrator | ++ unset VIRTUAL_ENV 2026-03-10 00:18:22.845358 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2026-03-10 00:18:22.845376 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']' 2026-03-10 00:18:22.845675 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2026-03-10 00:18:22.845695 | orchestrator | ++ '[' linux-gnu = msys ']' 2026-03-10 00:18:22.845706 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2026-03-10 00:18:22.845717 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2026-03-10 00:18:22.845729 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-10 00:18:22.845741 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-10 00:18:22.845872 | orchestrator | ++ export PATH 2026-03-10 00:18:22.845887 | orchestrator | ++ '[' -n '' ']' 2026-03-10 00:18:22.845939 | orchestrator | ++ '[' -z '' ']' 2026-03-10 00:18:22.845960 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2026-03-10 00:18:22.845978 | orchestrator | ++ PS1='(venv) ' 2026-03-10 00:18:22.845995 | orchestrator | ++ export PS1 2026-03-10 00:18:22.846006 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2026-03-10 00:18:22.846066 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2026-03-10 00:18:22.846079 | orchestrator | ++ hash -r 2026-03-10 00:18:22.846110 | orchestrator | + ansible-playbook -i 
testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2026-03-10 00:18:24.171723 | orchestrator | 2026-03-10 00:18:24.171858 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2026-03-10 00:18:24.171880 | orchestrator | 2026-03-10 00:18:24.171930 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-10 00:18:24.777271 | orchestrator | ok: [testbed-manager] 2026-03-10 00:18:24.777375 | orchestrator | 2026-03-10 00:18:24.777391 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-10 00:18:25.745062 | orchestrator | changed: [testbed-manager] 2026-03-10 00:18:25.745165 | orchestrator | 2026-03-10 00:18:25.745182 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2026-03-10 00:18:25.745196 | orchestrator | 2026-03-10 00:18:25.745207 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-10 00:18:28.189545 | orchestrator | ok: [testbed-manager] 2026-03-10 00:18:28.189649 | orchestrator | 2026-03-10 00:18:28.189666 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2026-03-10 00:18:28.253265 | orchestrator | ok: [testbed-manager] 2026-03-10 00:18:28.253355 | orchestrator | 2026-03-10 00:18:28.253370 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2026-03-10 00:18:28.729204 | orchestrator | changed: [testbed-manager] 2026-03-10 00:18:28.729306 | orchestrator | 2026-03-10 00:18:28.729322 | orchestrator | TASK [Add netbox_enable parameter] ********************************************* 2026-03-10 00:18:28.765365 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:18:28.765475 | orchestrator | 2026-03-10 00:18:28.765491 | orchestrator | TASK [Install HWE 
kernel package on Ubuntu] ************************************ 2026-03-10 00:18:29.131215 | orchestrator | changed: [testbed-manager] 2026-03-10 00:18:29.132299 | orchestrator | 2026-03-10 00:18:29.132342 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2026-03-10 00:18:29.476006 | orchestrator | ok: [testbed-manager] 2026-03-10 00:18:29.476133 | orchestrator | 2026-03-10 00:18:29.476163 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2026-03-10 00:18:29.586275 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:18:29.586376 | orchestrator | 2026-03-10 00:18:29.586393 | orchestrator | PLAY [Apply role traefik] ****************************************************** 2026-03-10 00:18:29.586406 | orchestrator | 2026-03-10 00:18:29.586417 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-10 00:18:31.375421 | orchestrator | ok: [testbed-manager] 2026-03-10 00:18:31.375527 | orchestrator | 2026-03-10 00:18:31.375544 | orchestrator | TASK [Apply traefik role] ****************************************************** 2026-03-10 00:18:31.498982 | orchestrator | included: osism.services.traefik for testbed-manager 2026-03-10 00:18:31.499095 | orchestrator | 2026-03-10 00:18:31.499121 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2026-03-10 00:18:31.566407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2026-03-10 00:18:31.566494 | orchestrator | 2026-03-10 00:18:31.566507 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2026-03-10 00:18:32.691082 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2026-03-10 00:18:32.691188 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 
2026-03-10 00:18:32.691204 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2026-03-10 00:18:32.691217 | orchestrator | 2026-03-10 00:18:32.691229 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2026-03-10 00:18:34.574479 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2026-03-10 00:18:34.574613 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2026-03-10 00:18:34.574628 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2026-03-10 00:18:34.574641 | orchestrator | 2026-03-10 00:18:34.574654 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2026-03-10 00:18:35.237657 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-10 00:18:35.237748 | orchestrator | changed: [testbed-manager] 2026-03-10 00:18:35.237763 | orchestrator | 2026-03-10 00:18:35.237775 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2026-03-10 00:18:35.850177 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-10 00:18:35.850266 | orchestrator | changed: [testbed-manager] 2026-03-10 00:18:35.850281 | orchestrator | 2026-03-10 00:18:35.850294 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2026-03-10 00:18:35.904530 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:18:35.904616 | orchestrator | 2026-03-10 00:18:35.904632 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2026-03-10 00:18:36.227673 | orchestrator | ok: [testbed-manager] 2026-03-10 00:18:36.227755 | orchestrator | 2026-03-10 00:18:36.227771 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2026-03-10 00:18:36.287718 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2026-03-10 00:18:36.287809 | orchestrator | 2026-03-10 00:18:36.287826 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2026-03-10 00:18:37.298106 | orchestrator | changed: [testbed-manager] 2026-03-10 00:18:37.298213 | orchestrator | 2026-03-10 00:18:37.298237 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2026-03-10 00:18:38.030466 | orchestrator | changed: [testbed-manager] 2026-03-10 00:18:38.030534 | orchestrator | 2026-03-10 00:18:38.030548 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2026-03-10 00:18:48.523979 | orchestrator | changed: [testbed-manager] 2026-03-10 00:18:48.524075 | orchestrator | 2026-03-10 00:18:48.524114 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2026-03-10 00:18:48.575093 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:18:48.575199 | orchestrator | 2026-03-10 00:18:48.575219 | orchestrator | PLAY [Deploy manager service] ************************************************** 2026-03-10 00:18:48.575236 | orchestrator | 2026-03-10 00:18:48.575252 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-10 00:18:51.476861 | orchestrator | ok: [testbed-manager] 2026-03-10 00:18:51.477625 | orchestrator | 2026-03-10 00:18:51.477684 | orchestrator | TASK [Apply manager role] ****************************************************** 2026-03-10 00:18:51.581609 | orchestrator | included: osism.services.manager for testbed-manager 2026-03-10 00:18:51.581685 | orchestrator | 2026-03-10 00:18:51.581700 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2026-03-10 00:18:51.649167 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2026-03-10 00:18:51.649252 | orchestrator | 2026-03-10 00:18:51.649266 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2026-03-10 00:18:53.819092 | orchestrator | ok: [testbed-manager] 2026-03-10 00:18:53.819202 | orchestrator | 2026-03-10 00:18:53.819228 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2026-03-10 00:18:53.872047 | orchestrator | ok: [testbed-manager] 2026-03-10 00:18:53.872135 | orchestrator | 2026-03-10 00:18:53.872152 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2026-03-10 00:18:53.992563 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2026-03-10 00:18:53.992643 | orchestrator | 2026-03-10 00:18:53.992659 | orchestrator | TASK [osism.services.manager : Create required directories] ******************** 2026-03-10 00:18:56.680617 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2026-03-10 00:18:56.680730 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2026-03-10 00:18:56.680754 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2026-03-10 00:18:56.680766 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2026-03-10 00:18:56.680777 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2026-03-10 00:18:56.680788 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2026-03-10 00:18:56.680816 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2026-03-10 00:18:56.680827 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2026-03-10 00:18:56.680839 | orchestrator | 2026-03-10 00:18:56.680851 | orchestrator | TASK 
[osism.services.manager : Copy all environment file] ********************** 2026-03-10 00:18:57.216533 | orchestrator | changed: [testbed-manager] 2026-03-10 00:18:57.216604 | orchestrator | 2026-03-10 00:18:57.216616 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2026-03-10 00:18:57.800398 | orchestrator | changed: [testbed-manager] 2026-03-10 00:18:57.800484 | orchestrator | 2026-03-10 00:18:57.800500 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2026-03-10 00:18:57.877930 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2026-03-10 00:18:57.878011 | orchestrator | 2026-03-10 00:18:57.878078 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2026-03-10 00:18:59.002778 | orchestrator | changed: [testbed-manager] => (item=ara) 2026-03-10 00:18:59.002866 | orchestrator | changed: [testbed-manager] => (item=ara-server) 2026-03-10 00:18:59.002880 | orchestrator | 2026-03-10 00:18:59.002917 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2026-03-10 00:18:59.611010 | orchestrator | changed: [testbed-manager] 2026-03-10 00:18:59.611111 | orchestrator | 2026-03-10 00:18:59.611131 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2026-03-10 00:18:59.670985 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:18:59.671104 | orchestrator | 2026-03-10 00:18:59.671120 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ****************** 2026-03-10 00:18:59.756649 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager 2026-03-10 00:18:59.756777 | orchestrator | 2026-03-10 00:18:59.756805 | orchestrator | TASK 
[osism.services.manager : Copy frontend environment file] ***************** 2026-03-10 00:19:00.397779 | orchestrator | changed: [testbed-manager] 2026-03-10 00:19:00.397877 | orchestrator | 2026-03-10 00:19:00.397922 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2026-03-10 00:19:00.465778 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2026-03-10 00:19:00.466165 | orchestrator | 2026-03-10 00:19:00.466191 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2026-03-10 00:19:01.839335 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-10 00:19:01.839450 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-10 00:19:01.839475 | orchestrator | changed: [testbed-manager] 2026-03-10 00:19:01.839496 | orchestrator | 2026-03-10 00:19:01.839515 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2026-03-10 00:19:02.494781 | orchestrator | changed: [testbed-manager] 2026-03-10 00:19:02.494929 | orchestrator | 2026-03-10 00:19:02.494956 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2026-03-10 00:19:02.548596 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:19:02.548688 | orchestrator | 2026-03-10 00:19:02.548704 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2026-03-10 00:19:02.644483 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2026-03-10 00:19:02.644564 | orchestrator | 2026-03-10 00:19:02.644577 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2026-03-10 00:19:03.199936 | orchestrator | changed: [testbed-manager] 2026-03-10 
00:19:03.200069 | orchestrator | 2026-03-10 00:19:03.200108 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2026-03-10 00:19:03.652789 | orchestrator | changed: [testbed-manager] 2026-03-10 00:19:03.652889 | orchestrator | 2026-03-10 00:19:03.652959 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2026-03-10 00:19:04.934931 | orchestrator | changed: [testbed-manager] => (item=conductor) 2026-03-10 00:19:04.935069 | orchestrator | changed: [testbed-manager] => (item=openstack) 2026-03-10 00:19:04.935087 | orchestrator | 2026-03-10 00:19:04.935100 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2026-03-10 00:19:05.641542 | orchestrator | changed: [testbed-manager] 2026-03-10 00:19:05.641640 | orchestrator | 2026-03-10 00:19:05.641657 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2026-03-10 00:19:06.022332 | orchestrator | ok: [testbed-manager] 2026-03-10 00:19:06.022458 | orchestrator | 2026-03-10 00:19:06.022486 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2026-03-10 00:19:06.384777 | orchestrator | changed: [testbed-manager] 2026-03-10 00:19:06.384872 | orchestrator | 2026-03-10 00:19:06.384887 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2026-03-10 00:19:06.441412 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:19:06.441498 | orchestrator | 2026-03-10 00:19:06.441510 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2026-03-10 00:19:06.512158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2026-03-10 00:19:06.512269 | orchestrator | 2026-03-10 00:19:06.512288 | orchestrator | TASK 
[osism.services.manager : Include wrapper vars file] ********************** 2026-03-10 00:19:06.574355 | orchestrator | ok: [testbed-manager] 2026-03-10 00:19:06.574436 | orchestrator | 2026-03-10 00:19:06.574446 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2026-03-10 00:19:08.648384 | orchestrator | changed: [testbed-manager] => (item=osism) 2026-03-10 00:19:08.648490 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2026-03-10 00:19:08.648505 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2026-03-10 00:19:08.648516 | orchestrator | 2026-03-10 00:19:08.648528 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2026-03-10 00:19:09.362834 | orchestrator | changed: [testbed-manager] 2026-03-10 00:19:09.362987 | orchestrator | 2026-03-10 00:19:09.363016 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2026-03-10 00:19:10.063122 | orchestrator | changed: [testbed-manager] 2026-03-10 00:19:10.063222 | orchestrator | 2026-03-10 00:19:10.063238 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2026-03-10 00:19:10.808536 | orchestrator | changed: [testbed-manager] 2026-03-10 00:19:10.808650 | orchestrator | 2026-03-10 00:19:10.808677 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2026-03-10 00:19:10.877583 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2026-03-10 00:19:10.877675 | orchestrator | 2026-03-10 00:19:10.877689 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2026-03-10 00:19:10.922443 | orchestrator | ok: [testbed-manager] 2026-03-10 00:19:10.922530 | orchestrator | 2026-03-10 00:19:10.922544 | orchestrator | TASK 
[osism.services.manager : Copy scripts] *********************************** 2026-03-10 00:19:11.644724 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2026-03-10 00:19:11.644791 | orchestrator | 2026-03-10 00:19:11.644808 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2026-03-10 00:19:11.731371 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2026-03-10 00:19:11.731453 | orchestrator | 2026-03-10 00:19:11.731465 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] ***************** 2026-03-10 00:19:12.458454 | orchestrator | changed: [testbed-manager] 2026-03-10 00:19:12.458571 | orchestrator | 2026-03-10 00:19:12.458591 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2026-03-10 00:19:13.078396 | orchestrator | ok: [testbed-manager] 2026-03-10 00:19:13.078491 | orchestrator | 2026-03-10 00:19:13.078503 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2026-03-10 00:19:13.143057 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:19:13.143138 | orchestrator | 2026-03-10 00:19:13.143150 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2026-03-10 00:19:13.195504 | orchestrator | ok: [testbed-manager] 2026-03-10 00:19:13.195631 | orchestrator | 2026-03-10 00:19:13.195660 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2026-03-10 00:19:14.041708 | orchestrator | changed: [testbed-manager] 2026-03-10 00:19:14.041789 | orchestrator | 2026-03-10 00:19:14.041798 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2026-03-10 00:20:23.543760 | orchestrator | changed: [testbed-manager] 2026-03-10 00:20:23.543869 | orchestrator | 2026-03-10 
00:20:23.543885 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2026-03-10 00:20:24.589173 | orchestrator | ok: [testbed-manager] 2026-03-10 00:20:24.589276 | orchestrator | 2026-03-10 00:20:24.589293 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] ******* 2026-03-10 00:20:24.649463 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:20:24.649575 | orchestrator | 2026-03-10 00:20:24.649598 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2026-03-10 00:20:27.179333 | orchestrator | changed: [testbed-manager] 2026-03-10 00:20:27.179438 | orchestrator | 2026-03-10 00:20:27.179454 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2026-03-10 00:20:27.276327 | orchestrator | ok: [testbed-manager] 2026-03-10 00:20:27.276419 | orchestrator | 2026-03-10 00:20:27.276457 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-10 00:20:27.276471 | orchestrator | 2026-03-10 00:20:27.276483 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2026-03-10 00:20:27.332608 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:20:27.332732 | orchestrator | 2026-03-10 00:20:27.332765 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2026-03-10 00:21:27.383646 | orchestrator | Pausing for 60 seconds 2026-03-10 00:21:27.383750 | orchestrator | changed: [testbed-manager] 2026-03-10 00:21:27.383764 | orchestrator | 2026-03-10 00:21:27.383776 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2026-03-10 00:21:31.031873 | orchestrator | changed: [testbed-manager] 2026-03-10 00:21:31.031982 | orchestrator | 2026-03-10 00:21:31.031991 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for 
an healthy manager service] *** 2026-03-10 00:22:33.137198 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2026-03-10 00:22:33.137313 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 2026-03-10 00:22:33.137329 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (48 retries left). 2026-03-10 00:22:33.137369 | orchestrator | changed: [testbed-manager] 2026-03-10 00:22:33.137381 | orchestrator | 2026-03-10 00:22:33.137393 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2026-03-10 00:22:44.439327 | orchestrator | changed: [testbed-manager] 2026-03-10 00:22:44.439454 | orchestrator | 2026-03-10 00:22:44.439476 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2026-03-10 00:22:44.518698 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2026-03-10 00:22:44.518796 | orchestrator | 2026-03-10 00:22:44.518812 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2026-03-10 00:22:44.518825 | orchestrator | 2026-03-10 00:22:44.518837 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2026-03-10 00:22:44.571812 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:22:44.571964 | orchestrator | 2026-03-10 00:22:44.571989 | orchestrator | TASK [osism.services.manager : Include version verification tasks] ************* 2026-03-10 00:22:44.658289 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager 2026-03-10 00:22:44.658392 | orchestrator | 2026-03-10 00:22:44.658408 | orchestrator | TASK [osism.services.manager : Deploy service 
manager version check script] **** 2026-03-10 00:22:45.474874 | orchestrator | changed: [testbed-manager] 2026-03-10 00:22:45.475018 | orchestrator | 2026-03-10 00:22:45.475034 | orchestrator | TASK [osism.services.manager : Execute service manager version check] ********** 2026-03-10 00:22:48.872439 | orchestrator | ok: [testbed-manager] 2026-03-10 00:22:48.872563 | orchestrator | 2026-03-10 00:22:48.872581 | orchestrator | TASK [osism.services.manager : Display version check results] ****************** 2026-03-10 00:22:48.952217 | orchestrator | ok: [testbed-manager] => { 2026-03-10 00:22:48.952309 | orchestrator | "version_check_result.stdout_lines": [ 2026-03-10 00:22:48.952322 | orchestrator | "=== OSISM Container Version Check ===", 2026-03-10 00:22:48.952333 | orchestrator | "Checking running containers against expected versions...", 2026-03-10 00:22:48.952343 | orchestrator | "", 2026-03-10 00:22:48.952353 | orchestrator | "Checking service: inventory_reconciler (Inventory Reconciler Service)", 2026-03-10 00:22:48.952363 | orchestrator | " Expected: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-10 00:22:48.952384 | orchestrator | " Enabled: true", 2026-03-10 00:22:48.952393 | orchestrator | " Running: registry.osism.tech/osism/inventory-reconciler:latest", 2026-03-10 00:22:48.952402 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:22:48.952411 | orchestrator | "", 2026-03-10 00:22:48.952420 | orchestrator | "Checking service: osism-ansible (OSISM Ansible Service)", 2026-03-10 00:22:48.952430 | orchestrator | " Expected: registry.osism.tech/osism/osism-ansible:latest", 2026-03-10 00:22:48.952438 | orchestrator | " Enabled: true", 2026-03-10 00:22:48.952447 | orchestrator | " Running: registry.osism.tech/osism/osism-ansible:latest", 2026-03-10 00:22:48.952456 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:22:48.952465 | orchestrator | "", 2026-03-10 00:22:48.952474 | orchestrator | "Checking service: osism-kubernetes (Osism-Kubernetes 
Service)", 2026-03-10 00:22:48.952482 | orchestrator | " Expected: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-10 00:22:48.952491 | orchestrator | " Enabled: true", 2026-03-10 00:22:48.952500 | orchestrator | " Running: registry.osism.tech/osism/osism-kubernetes:latest", 2026-03-10 00:22:48.952509 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:22:48.952518 | orchestrator | "", 2026-03-10 00:22:48.952527 | orchestrator | "Checking service: ceph-ansible (Ceph-Ansible Service)", 2026-03-10 00:22:48.952536 | orchestrator | " Expected: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-10 00:22:48.952545 | orchestrator | " Enabled: true", 2026-03-10 00:22:48.952554 | orchestrator | " Running: registry.osism.tech/osism/ceph-ansible:reef", 2026-03-10 00:22:48.952563 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:22:48.952572 | orchestrator | "", 2026-03-10 00:22:48.952580 | orchestrator | "Checking service: kolla-ansible (Kolla-Ansible Service)", 2026-03-10 00:22:48.952609 | orchestrator | " Expected: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-10 00:22:48.952619 | orchestrator | " Enabled: true", 2026-03-10 00:22:48.952627 | orchestrator | " Running: registry.osism.tech/osism/kolla-ansible:2024.2", 2026-03-10 00:22:48.952636 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:22:48.952645 | orchestrator | "", 2026-03-10 00:22:48.952654 | orchestrator | "Checking service: osismclient (OSISM Client)", 2026-03-10 00:22:48.952663 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-10 00:22:48.952671 | orchestrator | " Enabled: true", 2026-03-10 00:22:48.952680 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-10 00:22:48.952689 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:22:48.952698 | orchestrator | "", 2026-03-10 00:22:48.952706 | orchestrator | "Checking service: ara-server (ARA Server)", 2026-03-10 00:22:48.952715 | orchestrator | " Expected: 
registry.osism.tech/osism/ara-server:1.7.3", 2026-03-10 00:22:48.952724 | orchestrator | " Enabled: true", 2026-03-10 00:22:48.952733 | orchestrator | " Running: registry.osism.tech/osism/ara-server:1.7.3", 2026-03-10 00:22:48.952742 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:22:48.952751 | orchestrator | "", 2026-03-10 00:22:48.952759 | orchestrator | "Checking service: mariadb (MariaDB for ARA)", 2026-03-10 00:22:48.952768 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-10 00:22:48.952777 | orchestrator | " Enabled: true", 2026-03-10 00:22:48.952785 | orchestrator | " Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4", 2026-03-10 00:22:48.952802 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:22:48.952811 | orchestrator | "", 2026-03-10 00:22:48.952819 | orchestrator | "Checking service: frontend (OSISM Frontend)", 2026-03-10 00:22:48.952832 | orchestrator | " Expected: registry.osism.tech/osism/osism-frontend:latest", 2026-03-10 00:22:48.952841 | orchestrator | " Enabled: true", 2026-03-10 00:22:48.952851 | orchestrator | " Running: registry.osism.tech/osism/osism-frontend:latest", 2026-03-10 00:22:48.952860 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:22:48.952869 | orchestrator | "", 2026-03-10 00:22:48.952878 | orchestrator | "Checking service: redis (Redis Cache)", 2026-03-10 00:22:48.952886 | orchestrator | " Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-10 00:22:48.952895 | orchestrator | " Enabled: true", 2026-03-10 00:22:48.952904 | orchestrator | " Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine", 2026-03-10 00:22:48.952930 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:22:48.952939 | orchestrator | "", 2026-03-10 00:22:48.952947 | orchestrator | "Checking service: api (OSISM API Service)", 2026-03-10 00:22:48.952956 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-10 00:22:48.952965 | orchestrator | 
" Enabled: true", 2026-03-10 00:22:48.952973 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-10 00:22:48.952982 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:22:48.952991 | orchestrator | "", 2026-03-10 00:22:48.953000 | orchestrator | "Checking service: listener (OpenStack Event Listener)", 2026-03-10 00:22:48.953008 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-10 00:22:48.953017 | orchestrator | " Enabled: true", 2026-03-10 00:22:48.953026 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-10 00:22:48.953035 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:22:48.953043 | orchestrator | "", 2026-03-10 00:22:48.953052 | orchestrator | "Checking service: openstack (OpenStack Integration)", 2026-03-10 00:22:48.953061 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-10 00:22:48.953069 | orchestrator | " Enabled: true", 2026-03-10 00:22:48.953078 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-10 00:22:48.953087 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:22:48.953096 | orchestrator | "", 2026-03-10 00:22:48.953104 | orchestrator | "Checking service: beat (Celery Beat Scheduler)", 2026-03-10 00:22:48.953113 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-10 00:22:48.953122 | orchestrator | " Enabled: true", 2026-03-10 00:22:48.953137 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-10 00:22:48.953146 | orchestrator | " Status: ✅ MATCH", 2026-03-10 00:22:48.953155 | orchestrator | "", 2026-03-10 00:22:48.953163 | orchestrator | "Checking service: flower (Celery Flower Monitor)", 2026-03-10 00:22:48.953186 | orchestrator | " Expected: registry.osism.tech/osism/osism:latest", 2026-03-10 00:22:48.953196 | orchestrator | " Enabled: true", 2026-03-10 00:22:48.953204 | orchestrator | " Running: registry.osism.tech/osism/osism:latest", 2026-03-10 00:22:48.953213 
| orchestrator | " Status: ✅ MATCH", 2026-03-10 00:22:48.953221 | orchestrator | "", 2026-03-10 00:22:48.953230 | orchestrator | "=== Summary ===", 2026-03-10 00:22:48.953239 | orchestrator | "Errors (version mismatches): 0", 2026-03-10 00:22:48.953247 | orchestrator | "Warnings (expected containers not running): 0", 2026-03-10 00:22:48.953256 | orchestrator | "", 2026-03-10 00:22:48.953265 | orchestrator | "✅ All running containers match expected versions!" 2026-03-10 00:22:48.953274 | orchestrator | ] 2026-03-10 00:22:48.953282 | orchestrator | } 2026-03-10 00:22:48.953292 | orchestrator | 2026-03-10 00:22:48.953301 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] *** 2026-03-10 00:22:49.010822 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:22:49.010975 | orchestrator | 2026-03-10 00:22:49.010993 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:22:49.011006 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2026-03-10 00:22:49.011018 | orchestrator | 2026-03-10 00:22:49.122004 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2026-03-10 00:22:49.122182 | orchestrator | + deactivate 2026-03-10 00:22:49.122202 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2026-03-10 00:22:49.122216 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2026-03-10 00:22:49.122227 | orchestrator | + export PATH 2026-03-10 00:22:49.122239 | orchestrator | + unset _OLD_VIRTUAL_PATH 2026-03-10 00:22:49.122250 | orchestrator | + '[' -n '' ']' 2026-03-10 00:22:49.122261 | orchestrator | + hash -r 2026-03-10 00:22:49.122272 | orchestrator | + '[' -n '' ']' 2026-03-10 00:22:49.122283 | orchestrator | + unset VIRTUAL_ENV 2026-03-10 00:22:49.122293 | orchestrator | + 
unset VIRTUAL_ENV_PROMPT 2026-03-10 00:22:49.122304 | orchestrator | + '[' '!' '' = nondestructive ']' 2026-03-10 00:22:49.122315 | orchestrator | + unset -f deactivate 2026-03-10 00:22:49.122327 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2026-03-10 00:22:49.133738 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-10 00:22:49.133820 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-10 00:22:49.133833 | orchestrator | + local max_attempts=60 2026-03-10 00:22:49.133845 | orchestrator | + local name=ceph-ansible 2026-03-10 00:22:49.133856 | orchestrator | + local attempt_num=1 2026-03-10 00:22:49.134564 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:22:49.167152 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:22:49.167237 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-10 00:22:49.167251 | orchestrator | + local max_attempts=60 2026-03-10 00:22:49.167263 | orchestrator | + local name=kolla-ansible 2026-03-10 00:22:49.167274 | orchestrator | + local attempt_num=1 2026-03-10 00:22:49.167756 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-10 00:22:49.206322 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:22:49.206418 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2026-03-10 00:22:49.206433 | orchestrator | + local max_attempts=60 2026-03-10 00:22:49.206445 | orchestrator | + local name=osism-ansible 2026-03-10 00:22:49.206456 | orchestrator | + local attempt_num=1 2026-03-10 00:22:49.206866 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-10 00:22:49.244508 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:22:49.244625 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-10 00:22:49.244650 | orchestrator | + sh -c 
/opt/configuration/scripts/disable-ara.sh 2026-03-10 00:22:49.963977 | orchestrator | + docker compose --project-directory /opt/manager ps 2026-03-10 00:22:50.126899 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-10 00:22:50.127081 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible 2 minutes ago Up About a minute (healthy) 2026-03-10 00:22:50.127100 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible 2 minutes ago Up About a minute (healthy) 2026-03-10 00:22:50.127112 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api 2 minutes ago Up 2 minutes (healthy) 192.168.16.5:8000->8000/tcp 2026-03-10 00:22:50.127125 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server 2 minutes ago Up 2 minutes (healthy) 8000/tcp 2026-03-10 00:22:50.127136 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat 2 minutes ago Up 2 minutes (healthy) 2026-03-10 00:22:50.127147 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower 2 minutes ago Up 2 minutes (healthy) 2026-03-10 00:22:50.127159 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler 2 minutes ago Up About a minute (healthy) 2026-03-10 00:22:50.127186 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener 2 minutes ago Up 2 minutes (healthy) 2026-03-10 00:22:50.127198 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb 2 minutes ago Up 2 minutes (healthy) 3306/tcp 2026-03-10 00:22:50.127209 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- 
osism…" openstack 2 minutes ago Up 2 minutes (healthy) 2026-03-10 00:22:50.127220 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis 2 minutes ago Up 2 minutes (healthy) 6379/tcp 2026-03-10 00:22:50.127231 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible 2 minutes ago Up About a minute (healthy) 2026-03-10 00:22:50.127242 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend 2 minutes ago Up 2 minutes 192.168.16.5:3000->3000/tcp 2026-03-10 00:22:50.127253 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes 2 minutes ago Up About a minute (healthy) 2026-03-10 00:22:50.127264 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient 2 minutes ago Up 2 minutes (healthy) 2026-03-10 00:22:50.132746 | orchestrator | ++ semver latest 7.0.0 2026-03-10 00:22:50.194442 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-10 00:22:50.194530 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-10 00:22:50.194544 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-10 00:22:50.198589 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-10 00:23:02.458548 | orchestrator | 2026-03-10 00:23:02 | INFO  | Prepare task for execution of resolvconf. 2026-03-10 00:23:02.675718 | orchestrator | 2026-03-10 00:23:02 | INFO  | Task bc6c7174-e973-4da3-aac2-d2fbbec6c365 (resolvconf) was prepared for execution. 2026-03-10 00:23:02.675879 | orchestrator | 2026-03-10 00:23:02 | INFO  | It takes a moment until task bc6c7174-e973-4da3-aac2-d2fbbec6c365 (resolvconf) has been started and output is visible here. 
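The readiness gate traced earlier with `set -x` (`wait_for_container_healthy 60 ceph-ansible`, then `kolla-ansible` and `osism-ansible`) can be sketched roughly as follows. The `docker inspect` probe and the `max_attempts`/`name`/`attempt_num` locals come from the trace; the retry delay and the failure message are assumptions, since all three containers were already healthy in this run and the loop body never executed.

```shell
# Sketch of the health-wait helper traced above. The docker inspect probe and
# the local variable names match the set -x output; the sleep interval and the
# error message are assumed, because the loop never iterated in this run.
wait_for_container_healthy() {
    local max_attempts=$1
    local name=$2
    local attempt_num=1
    until [ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" = "healthy" ]; do
        if [ "$attempt_num" -ge "$max_attempts" ]; then
            echo "container $name did not become healthy in time" >&2
            return 1
        fi
        attempt_num=$((attempt_num + 1))
        sleep 5  # assumed interval; not visible in the trace
    done
}
```

Called as in the trace, e.g. `wait_for_container_healthy 60 ceph-ansible`; it returns immediately when the container already reports `healthy`.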
2026-03-10 00:23:16.805223 | orchestrator |
2026-03-10 00:23:16.805336 | orchestrator | PLAY [Apply role resolvconf] ***************************************************
2026-03-10 00:23:16.805353 | orchestrator |
2026-03-10 00:23:16.805365 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-10 00:23:16.805377 | orchestrator | Tuesday 10 March 2026 00:23:06 +0000 (0:00:00.149) 0:00:00.149 *********
2026-03-10 00:23:16.805389 | orchestrator | ok: [testbed-manager]
2026-03-10 00:23:16.805401 | orchestrator |
2026-03-10 00:23:16.805413 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
2026-03-10 00:23:16.805425 | orchestrator | Tuesday 10 March 2026 00:23:10 +0000 (0:00:03.810) 0:00:03.959 *********
2026-03-10 00:23:16.805436 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:23:16.805449 | orchestrator |
2026-03-10 00:23:16.805460 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
2026-03-10 00:23:16.805471 | orchestrator | Tuesday 10 March 2026 00:23:10 +0000 (0:00:00.073) 0:00:04.033 *********
2026-03-10 00:23:16.805482 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager
2026-03-10 00:23:16.805495 | orchestrator |
2026-03-10 00:23:16.805506 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
2026-03-10 00:23:16.805517 | orchestrator | Tuesday 10 March 2026 00:23:10 +0000 (0:00:00.091) 0:00:04.124 *********
2026-03-10 00:23:16.805529 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager
2026-03-10 00:23:16.805540 | orchestrator |
2026-03-10 00:23:16.805562 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
2026-03-10 00:23:16.805574 | orchestrator | Tuesday 10 March 2026 00:23:10 +0000 (0:00:00.066) 0:00:04.191 *********
2026-03-10 00:23:16.805585 | orchestrator | ok: [testbed-manager]
2026-03-10 00:23:16.805596 | orchestrator |
2026-03-10 00:23:16.805608 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
2026-03-10 00:23:16.805620 | orchestrator | Tuesday 10 March 2026 00:23:12 +0000 (0:00:01.129) 0:00:05.321 *********
2026-03-10 00:23:16.805631 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:23:16.805642 | orchestrator |
2026-03-10 00:23:16.805653 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
2026-03-10 00:23:16.805664 | orchestrator | Tuesday 10 March 2026 00:23:12 +0000 (0:00:00.064) 0:00:05.385 *********
2026-03-10 00:23:16.805675 | orchestrator | ok: [testbed-manager]
2026-03-10 00:23:16.805686 | orchestrator |
2026-03-10 00:23:16.805697 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
2026-03-10 00:23:16.805708 | orchestrator | Tuesday 10 March 2026 00:23:12 +0000 (0:00:00.542) 0:00:05.928 *********
2026-03-10 00:23:16.805719 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:23:16.805731 | orchestrator |
2026-03-10 00:23:16.805742 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
2026-03-10 00:23:16.805755 | orchestrator | Tuesday 10 March 2026 00:23:12 +0000 (0:00:00.093) 0:00:06.022 *********
2026-03-10 00:23:16.805768 | orchestrator | changed: [testbed-manager]
2026-03-10 00:23:16.805780 | orchestrator |
2026-03-10 00:23:16.805793 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] *********************
2026-03-10 00:23:16.805805 | orchestrator | Tuesday 10 March 2026 00:23:13 +0000 (0:00:00.545) 0:00:06.567 *********
2026-03-10 00:23:16.805818 | orchestrator | changed: [testbed-manager]
2026-03-10 00:23:16.805830 | orchestrator |
2026-03-10 00:23:16.805843 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
2026-03-10 00:23:16.805855 | orchestrator | Tuesday 10 March 2026 00:23:14 +0000 (0:00:01.079) 0:00:07.646 *********
2026-03-10 00:23:16.805867 | orchestrator | ok: [testbed-manager]
2026-03-10 00:23:16.805903 | orchestrator |
2026-03-10 00:23:16.805946 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
2026-03-10 00:23:16.805959 | orchestrator | Tuesday 10 March 2026 00:23:15 +0000 (0:00:00.980) 0:00:08.627 *********
2026-03-10 00:23:16.805972 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager
2026-03-10 00:23:16.805985 | orchestrator |
2026-03-10 00:23:16.805997 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
2026-03-10 00:23:16.806009 | orchestrator | Tuesday 10 March 2026 00:23:15 +0000 (0:00:00.085) 0:00:08.713 *********
2026-03-10 00:23:16.806075 | orchestrator | changed: [testbed-manager]
2026-03-10 00:23:16.806089 | orchestrator |
2026-03-10 00:23:16.806102 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:23:16.806115 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2026-03-10 00:23:16.806128 | orchestrator |
2026-03-10 00:23:16.806141 | orchestrator |
2026-03-10 00:23:16.806152 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:23:16.806163 | orchestrator | Tuesday 10 March 2026 00:23:16 +0000 (0:00:01.159) 0:00:09.872 *********
2026-03-10 00:23:16.806174 | orchestrator | ===============================================================================
2026-03-10 00:23:16.806184 | orchestrator | Gathering Facts --------------------------------------------------------- 3.81s
2026-03-10 00:23:16.806195 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.16s
2026-03-10 00:23:16.806206 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.13s
2026-03-10 00:23:16.806217 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.08s
2026-03-10 00:23:16.806228 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.98s
2026-03-10 00:23:16.806239 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.55s
2026-03-10 00:23:16.806267 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.54s
2026-03-10 00:23:16.806279 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.09s
2026-03-10 00:23:16.806290 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.09s
2026-03-10 00:23:16.806301 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s
2026-03-10 00:23:16.806312 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s
2026-03-10 00:23:16.806323 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.07s
2026-03-10 00:23:16.806334 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s
2026-03-10 00:23:17.157101 | orchestrator | + osism apply sshconfig
2026-03-10 00:23:29.351693 | orchestrator | 2026-03-10 00:23:29 | INFO  | Prepare task for execution of sshconfig.
2026-03-10 00:23:29.428381 | orchestrator | 2026-03-10 00:23:29 | INFO  | Task e6e73e61-6cff-4ef1-a927-269099b09406 (sshconfig) was prepared for execution.
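Each play above ends with a `PLAY RECAP` line of the form `testbed-manager : ok=10 changed=3 unreachable=0 failed=0 ...`. The job itself relies on the osism CLI's exit status rather than on log parsing, but when scanning a saved console log by hand, a small filter over those counters can surface failed plays. The helper below is purely illustrative and not part of the testbed scripts.

```shell
# Illustrative only: read Ansible PLAY RECAP host lines on stdin and succeed
# only if no line reports a nonzero failed= or unreachable= counter.
# Not part of the testbed job, which checks the osism CLI's exit status.
check_recap() {
    ! grep -E 'unreachable=[1-9][0-9]*|failed=[1-9][0-9]*'
}
```

For example, piping the recap section of a saved log (hypothetical file name) through it: `grep -A 2 'PLAY RECAP' job.log | check_recap`.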
2026-03-10 00:23:29.428480 | orchestrator | 2026-03-10 00:23:29 | INFO  | It takes a moment until task e6e73e61-6cff-4ef1-a927-269099b09406 (sshconfig) has been started and output is visible here. 2026-03-10 00:23:41.659457 | orchestrator | 2026-03-10 00:23:41.659570 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-10 00:23:41.659587 | orchestrator | 2026-03-10 00:23:41.659600 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-10 00:23:41.659612 | orchestrator | Tuesday 10 March 2026 00:23:33 +0000 (0:00:00.165) 0:00:00.165 ********* 2026-03-10 00:23:41.659623 | orchestrator | ok: [testbed-manager] 2026-03-10 00:23:41.659635 | orchestrator | 2026-03-10 00:23:41.659646 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-10 00:23:41.659685 | orchestrator | Tuesday 10 March 2026 00:23:34 +0000 (0:00:00.552) 0:00:00.718 ********* 2026-03-10 00:23:41.659697 | orchestrator | changed: [testbed-manager] 2026-03-10 00:23:41.659711 | orchestrator | 2026-03-10 00:23:41.659730 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-10 00:23:41.659749 | orchestrator | Tuesday 10 March 2026 00:23:34 +0000 (0:00:00.518) 0:00:01.237 ********* 2026-03-10 00:23:41.659767 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-10 00:23:41.659784 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-10 00:23:41.659802 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-10 00:23:41.659819 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-10 00:23:41.659839 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-10 00:23:41.659857 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-03-10 00:23:41.659877 | orchestrator | changed: 
[testbed-manager] => (item=testbed-manager) 2026-03-10 00:23:41.659897 | orchestrator | 2026-03-10 00:23:41.659997 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-10 00:23:41.660012 | orchestrator | Tuesday 10 March 2026 00:23:40 +0000 (0:00:05.874) 0:00:07.111 ********* 2026-03-10 00:23:41.660025 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:23:41.660037 | orchestrator | 2026-03-10 00:23:41.660050 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-10 00:23:41.660063 | orchestrator | Tuesday 10 March 2026 00:23:40 +0000 (0:00:00.081) 0:00:07.193 ********* 2026-03-10 00:23:41.660076 | orchestrator | changed: [testbed-manager] 2026-03-10 00:23:41.660088 | orchestrator | 2026-03-10 00:23:41.660101 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:23:41.660116 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:23:41.660129 | orchestrator | 2026-03-10 00:23:41.660141 | orchestrator | 2026-03-10 00:23:41.660154 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:23:41.660166 | orchestrator | Tuesday 10 March 2026 00:23:41 +0000 (0:00:00.562) 0:00:07.755 ********* 2026-03-10 00:23:41.660179 | orchestrator | =============================================================================== 2026-03-10 00:23:41.660192 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.87s 2026-03-10 00:23:41.660204 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.56s 2026-03-10 00:23:41.660217 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.55s 2026-03-10 00:23:41.660233 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.52s 2026-03-10 00:23:41.660253 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2026-03-10 00:23:41.987517 | orchestrator | + osism apply known-hosts 2026-03-10 00:23:54.173136 | orchestrator | 2026-03-10 00:23:54 | INFO  | Prepare task for execution of known-hosts. 2026-03-10 00:23:54.247869 | orchestrator | 2026-03-10 00:23:54 | INFO  | Task 4c79ea84-4e02-42bb-b4e7-beb2885703aa (known-hosts) was prepared for execution. 2026-03-10 00:23:54.248003 | orchestrator | 2026-03-10 00:23:54 | INFO  | It takes a moment until task 4c79ea84-4e02-42bb-b4e7-beb2885703aa (known-hosts) has been started and output is visible here. 2026-03-10 00:24:10.690311 | orchestrator | 2026-03-10 00:24:10.690465 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-10 00:24:10.690479 | orchestrator | 2026-03-10 00:24:10.690487 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-10 00:24:10.690496 | orchestrator | Tuesday 10 March 2026 00:23:58 +0000 (0:00:00.172) 0:00:00.172 ********* 2026-03-10 00:24:10.690504 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-10 00:24:10.690512 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-10 00:24:10.690540 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-10 00:24:10.690548 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-10 00:24:10.690556 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-10 00:24:10.690563 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-10 00:24:10.690570 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-10 00:24:10.690577 | orchestrator | 2026-03-10 00:24:10.690585 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-10 
00:24:10.690593 | orchestrator | Tuesday 10 March 2026 00:24:04 +0000 (0:00:06.037) 0:00:06.209 ********* 2026-03-10 00:24:10.690611 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-10 00:24:10.690621 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-10 00:24:10.690628 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-10 00:24:10.690636 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-10 00:24:10.690643 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-10 00:24:10.690650 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-10 00:24:10.690657 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-10 00:24:10.690665 | orchestrator | 2026-03-10 00:24:10.690672 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:24:10.690679 | orchestrator | Tuesday 10 March 2026 00:24:04 +0000 (0:00:00.175) 0:00:06.385 ********* 2026-03-10 00:24:10.690689 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsnuppIi/k3DjYZWf7x1hp2gi1ad/+SbUcmGmVhWRSvcZ8r0/gUuYF5veEmincg+Yxuk8XNh1z8PBnzl0Gzbhsp6ZO1GWO2fWjhyxvjGBy8Yk+fH4DidoSo3QqmyOIvNSmHjqhhbd0R99NBC82U5wq8aQfwQaAbAZEhyfHKJ3CEr5mndthFullfeyQvDFA+wpUPyz+RJlRiBz5CZ2J8GH4EsHMwuM99FeYx8DzVQ65sLUiWnhlEkEALjuwvUA6s5T7JqofFG5qcz+bqWlylx+OZ4hg0Je8o7HRhbwGnXE2bMzUTLKRa9zU4ZP3urU9b19LyxZFIAO5AuVKALjEx2eOi+I9RfFW8NldhfBwF2PIuBXZga9ML3RDODBXp/cM6Q1kh+OC3KqxUzt2oG18+lwPYiKLynYsUN/5+frZWRs63Jt15k/bevEUCuOR2rEB63WBGsFu9m3KJlysswUDEah2RUnGR6sNTUhehJbp8eVh1NaFlC6TeQea6C0UjGV6sL8=) 2026-03-10 00:24:10.690700 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOp/HP6N9HOHzdAo7Op5wZkytft/DzHxh/4Se2g4LuWuwauw+a0B20RLMPP7/DyfXHvLH0Ecc/r/2v+7qitbWU4=) 2026-03-10 00:24:10.690709 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKrhZe1FROTkykYtio1jxLZv4CdFyxYCU9TcRY9viHbL) 2026-03-10 00:24:10.690718 | orchestrator | 2026-03-10 00:24:10.690726 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:24:10.690733 | orchestrator | Tuesday 10 March 2026 00:24:05 +0000 (0:00:01.205) 0:00:07.590 ********* 2026-03-10 00:24:10.690761 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwnI0QlNy0uMX1hBwhHf9vPGwRTez0QLH0Qr7Ni69kVySopc7jvHAmzhg05OGgITdKY6/6bYFCpHg6gYkgXTcnPlKTS4+OGAWdjEWtCeBq0iUTH0b3JyjCYwrU7hBiZdK8wexsk4jwn00NPLRQq93Zbrk7YIS+LWG5F7yi8JTAIx+f0NXwmOXUsHRC3/GSa7VuXsfOlXtEkV6Ao7Wg/mNP3kzPA6NXUzq1masBz4ydALlrVGBK1VkwNVNZvupPZwARgERU5fQc+ucU2K/UGnLmwGhOn6C9EPFMHz3jOSjX+rhkwKyvYVhTGuM+SFcMJRAKlBqn3ZlLhqG8MPEDIhF5FMf/DyB+FMomyd60PRTjvoaHwSMplWep/icyr/h5hXZiAS+wKeZhkHhtyL3mWLZ287mZ7HSqUq04bLVKdaguBrpoK7HXlYMN6T2qH7dKk+FtS8/7etAE/4Z7fM2HpcwYEvAEyEaCKUwfo3Rlo/Do30JpwR8qdCu2s5cw3VqoqOk=) 2026-03-10 
00:24:10.690775 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCsBrGGHbxBJpMPZcI30UrsM2f/trVvDGabU8j/GzF4vFSNscUWUtOgbsOVL95iJStKiiZaKpfuIsRIPNaWHBkM=) 2026-03-10 00:24:10.690783 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPU4IrUDtT7PHxNYzZagTb8yXqGKziw+7QdRpKwQFMhz) 2026-03-10 00:24:10.690790 | orchestrator | 2026-03-10 00:24:10.690797 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:24:10.690805 | orchestrator | Tuesday 10 March 2026 00:24:06 +0000 (0:00:01.100) 0:00:08.691 ********* 2026-03-10 00:24:10.690812 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJo9DiEP0w2Rl8gzr/TWkHdpbpkpcfV0780GyTjNy1BK) 2026-03-10 00:24:10.690822 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCo1LgLw25ZDznSLhhYmpNOPTjdpsl6ghtH5I+YGvwCqd0A864osY4Gi7NvESrgNGxEv3MlunbPJUSYEQ05snK+fPuaKp3dRt+iU/rnF6nMneJb7uNNJJxrLs9Wmw4vTwM83S9giTI7iK+M74bkG7TRqYSb73rAopujloPY4tD2cDHEQG7oFZCf/F+us2aY+8dQYoSTuRb20APaO/0+2g9xeT2nxqwQuiiHIFoLe6EgQ0fqjMXJS3CEGZyU6kmBHCK+D7r9/iS1wqUXvE1lImsO84mEr3zZv1khT4VKEsI0ISWiZfFDwm+bsCC7NILo9kda7KRk2BGLRGrGBtvopIJ2SrPDdx9/Ae00/PCYoaMeNAxiW0+uMbhwhenfovdOgvaQZa0DYbFQfm5hK8gRoBWVwW5rkB5GGqYa0TC4chNDMyhORpNuLkj60Ba0ltjdQRFgZm5VNOnGCOrTtMuQcmy+FuNS0/OR8ExkFyYmAv5OX40Y229NfnPVfViWGwXp4XU=) 2026-03-10 00:24:10.690887 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGwiUJFBhrtUsLYIwlV9/Cz/5CxZptjApkOMUL8FS9nFarZp65uEVZSXxWfp/wcsE7+R3q9AuqP1jOJ81DplArc=) 2026-03-10 00:24:10.690922 | orchestrator | 2026-03-10 00:24:10.690936 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:24:10.690947 | 
orchestrator | Tuesday 10 March 2026 00:24:08 +0000 (0:00:01.110) 0:00:09.801 ********* 2026-03-10 00:24:10.690956 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOt6d+9IlAxz0yKsBn6AzRyxw2XBd93/j2JuiSQVkthT) 2026-03-10 00:24:10.690965 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+z/L59nLgB5ZPjh0s+GUwmA3bPbVfCJz5oVCLoK9I8O80ZfrHuMscrRGBsZ1HuJxTZe1d8AbFBqGTJGvzWJAHq5YYWdzzSDMPKt5PflkaYk40X6wdFwC+ShcqiumKdwLqsLVIj117K59jLYoik78WTGRznhVjNhYcjj+2b0nY2auMVUU4RoxmynZg6vmO6rHBbSZcFtOcHGhz7G5x6waQuhUrxmnb+tnFkpWyYKDFACKQUb1m37bG+0lHt6YHRMSDp6SYBYlzINjbyWfmBFlu/Y1AKKlOwVxejbOAkzJ85WZ76m5f5Iv8UYsKn4Bi0i4rbfXlNLitC1E/waA3HYbRbAmWnVhbSbTTOjdtpDP1EtPk9JS7wHiSWzv7sx2PHv/C62FtU44uII/IGcTP7n+s4PxS/u5XAPGMqm1szXWvWuKyFUMEYb3QoMRInp0ubPE6dlxOJLRYDYtTd/9gTMtGhqN5hJANJjw2b6MMygRRDsr8A/nRPfLHTSTr/xiXbnk=) 2026-03-10 00:24:10.690974 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKP+NnrwHo74Jg3muyAi1AnhUGfCG9WknFyTUBlDsSXMorLI8U5fqfBVPPuHdI2zI21ucVuRv9pTdq4ymeFDZRY=) 2026-03-10 00:24:10.690982 | orchestrator | 2026-03-10 00:24:10.690990 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:24:10.690999 | orchestrator | Tuesday 10 March 2026 00:24:09 +0000 (0:00:01.079) 0:00:10.881 ********* 2026-03-10 00:24:10.691007 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGpzohPItffhYRXMEqovuIUrF9VhEmw5G5pPhmzLvHZzTVhJ2B3+nTVUTlRnqIbLAcPdh2NdCRxSMPprG7AYS+w=) 2026-03-10 00:24:10.691016 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDig8J/2WGtAd4enNM9DOQ9ShD7WCmhtqCqAVcK9z5dv) 2026-03-10 00:24:10.691030 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDmi5k4OZf82VuSJgFYfQwRgKGP9JnQrL+RhttmHCHvj75yR/FlKnISrMimq3L/JJj0FvJK0/o+iBGiv1o5lKaKqkTblFMLNQDpXF7sx0BvsNKf6pJAzhmSgTGJtEXzyYSyyarYZ7uQYTS99zNS5S8YYWI0+tDYR6uzErdMqeYZRrBgtOm0u/Chu/oikw+C7aC9SOSV70KVlLCKnXFqXuX8ysoO1HSQP1hISo1IbJPT0hF/9K5M6qDLGPNH2l76/m4rn8vOtQs/VL/kpOgwSnTDPM+DEnW3hwvhadQPhvBeWeO6A7iVa3Q4ak4bg3tk7AK4Uy2zFa5Yxkttl1WBAbsO12hJIF9Fa2jA5NFQFEUoqtbT0yhlenZJdrP66sN43sFBQBTOU0jUv3zHV8MzST8j0BH4y2n045tcZIXQrSIJKy7ZQCTE+D5PMgZYqL75XnaQloNI6Vkyi86laL9dqvZCcJHe5UzyYqejAL9nZIVuhymTrLDiAFWRRP8hRm4Oom0=) 2026-03-10 00:24:10.691039 | orchestrator | 2026-03-10 00:24:10.691047 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:24:10.691056 | orchestrator | Tuesday 10 March 2026 00:24:10 +0000 (0:00:01.121) 0:00:12.003 ********* 2026-03-10 00:24:10.691071 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCwXas65hbuK9kOoI14KqUs63q72BlPJxBMbVUaniKO0B9wmUaryft+XVXRIf9cRjqn7GLQj+AIASZu1jgrW4xzcKKhrcEDt7boLOSdWFHx8y4bpB/D1cvJhG/Xm2ndWCxeFUxzze/lInQKWkZ8umEMketyLa2XzbZ2vlXvlayENih6WI3ixUvU8aH3pqYVrrvzFkYYIRcQX19Zmh47WhjoV3wEKHgov0x6WdTu8hxMsxlY4fGQF1CS0ovozzYUw0rJliYqMQxB/uf/YIZCalf2//dqYZnWnQ+xKZMtXKDeKoHKOvMjuKQTB5fJvdJkRx6HWTC66ChFsayOb0ONuKoDJYpvqoUwj+zTGiJ4C1r37nN4mgfaDVb3n9C8KzNvHxl2qBVBqjTq0TFtQluJWWOcCEER7/a2IQ9nrcD7UclTBEj6Tz3acmYXbnWKc5Fbm6JQKjoMi5Rv+7ZSVj4kToaTkHECjTtte1Y3E8CtZyr9c5aC2cJXzKfEUCVB/t0DN8c=) 2026-03-10 00:24:22.466828 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIvVjEXp86QI90yF3dvLKhIc4mPq5bJesEx1THVKLMQjotjuBea6+RlEVLCZdiLDU8571j4qj4Bt9W/aRvTDywI=) 2026-03-10 00:24:22.466957 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHgwMke8TxAHoVYBHi9lyXv5NxAkJStUm8Pmwj2VVtsl) 2026-03-10 00:24:22.466975 | orchestrator | 2026-03-10 00:24:22.466987 | 
orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:24:22.466999 | orchestrator | Tuesday 10 March 2026 00:24:11 +0000 (0:00:01.077) 0:00:13.080 ********* 2026-03-10 00:24:22.467009 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEsv2jGNoQWx2f9256PVfZ8+hIFe8WdrWA51Mqn2QMQqH6rntC5P7fEgYlvNAhGR/waTuMuddPak430oeGILVfM=) 2026-03-10 00:24:22.467021 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqaPWTXZMRx3X5Mygzexua6Y/Db9oo5Mqhu425OlktzqmeA7NvnBNrCtdWMC+4aK4n42/8wax+WCfK1X8weYYTC2oUoIZb17lCqRXCpUBPf00tyZPbs+1JktgLUE0YoZNsDfcmST/A02HMpFQ+nLErAL+H1aLCVQ8iVcIeL+8dg+xH7MIs6nyM7+ZdwC2+6OjaCpzlQt/O0Mj0pgJLrNZPcdGQVmX4MQ5c5Vp5MvGm8+BT241GVZ/wMDVm5ptrAhG0BF8YdWyIt+gvllSGSaV5BE41V3vK+0yMcked58Y3ovyIYJpQVLMby8iYYFne4JMPFzhjVpIieLkX/O4+1mNSYSJW6AKsxU9DULW8R4tNsRd0ST341nvsnZEeuMZUm82ZAmUgj+A6uXz7C/VkYjP+DgIOLO3WSW8rDHvRnaFAPnMhR7braULg8hQNguu7YFsvyI4dh4+f+GgDwgpUsT7ykxrNEt5n1FmBNjwMMhJ+ag9kQ0b6ZHxMsxQwmSFN9Zs=) 2026-03-10 00:24:22.467034 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPVy0L+aGRDN7L1u1JK7elf6i8kHKCS41wrds8YwzTjw) 2026-03-10 00:24:22.467044 | orchestrator | 2026-03-10 00:24:22.467054 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-10 00:24:22.467065 | orchestrator | Tuesday 10 March 2026 00:24:12 +0000 (0:00:01.098) 0:00:14.179 ********* 2026-03-10 00:24:22.467075 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-10 00:24:22.467086 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-10 00:24:22.467095 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-10 00:24:22.467105 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-10 00:24:22.467115 | orchestrator | 
ok: [testbed-manager] => (item=testbed-node-0) 2026-03-10 00:24:22.467141 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-10 00:24:22.467174 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-10 00:24:22.467185 | orchestrator | 2026-03-10 00:24:22.467194 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-10 00:24:22.467205 | orchestrator | Tuesday 10 March 2026 00:24:17 +0000 (0:00:05.403) 0:00:19.583 ********* 2026-03-10 00:24:22.467216 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-10 00:24:22.467228 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-10 00:24:22.467238 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-10 00:24:22.467248 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-10 00:24:22.467257 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-10 00:24:22.467267 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-10 00:24:22.467276 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-10 00:24:22.467286 | orchestrator | 2026-03-10 00:24:22.467296 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:24:22.467306 | orchestrator | Tuesday 10 March 2026 00:24:18 +0000 (0:00:00.195) 0:00:19.778 ********* 2026-03-10 00:24:22.467316 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOp/HP6N9HOHzdAo7Op5wZkytft/DzHxh/4Se2g4LuWuwauw+a0B20RLMPP7/DyfXHvLH0Ecc/r/2v+7qitbWU4=) 2026-03-10 00:24:22.467344 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsnuppIi/k3DjYZWf7x1hp2gi1ad/+SbUcmGmVhWRSvcZ8r0/gUuYF5veEmincg+Yxuk8XNh1z8PBnzl0Gzbhsp6ZO1GWO2fWjhyxvjGBy8Yk+fH4DidoSo3QqmyOIvNSmHjqhhbd0R99NBC82U5wq8aQfwQaAbAZEhyfHKJ3CEr5mndthFullfeyQvDFA+wpUPyz+RJlRiBz5CZ2J8GH4EsHMwuM99FeYx8DzVQ65sLUiWnhlEkEALjuwvUA6s5T7JqofFG5qcz+bqWlylx+OZ4hg0Je8o7HRhbwGnXE2bMzUTLKRa9zU4ZP3urU9b19LyxZFIAO5AuVKALjEx2eOi+I9RfFW8NldhfBwF2PIuBXZga9ML3RDODBXp/cM6Q1kh+OC3KqxUzt2oG18+lwPYiKLynYsUN/5+frZWRs63Jt15k/bevEUCuOR2rEB63WBGsFu9m3KJlysswUDEah2RUnGR6sNTUhehJbp8eVh1NaFlC6TeQea6C0UjGV6sL8=) 2026-03-10 00:24:22.467355 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKrhZe1FROTkykYtio1jxLZv4CdFyxYCU9TcRY9viHbL) 2026-03-10 00:24:22.467365 | orchestrator | 2026-03-10 00:24:22.467377 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:24:22.467388 | orchestrator | Tuesday 10 March 2026 00:24:19 +0000 (0:00:01.168) 0:00:20.947 ********* 2026-03-10 00:24:22.467400 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCwnI0QlNy0uMX1hBwhHf9vPGwRTez0QLH0Qr7Ni69kVySopc7jvHAmzhg05OGgITdKY6/6bYFCpHg6gYkgXTcnPlKTS4+OGAWdjEWtCeBq0iUTH0b3JyjCYwrU7hBiZdK8wexsk4jwn00NPLRQq93Zbrk7YIS+LWG5F7yi8JTAIx+f0NXwmOXUsHRC3/GSa7VuXsfOlXtEkV6Ao7Wg/mNP3kzPA6NXUzq1masBz4ydALlrVGBK1VkwNVNZvupPZwARgERU5fQc+ucU2K/UGnLmwGhOn6C9EPFMHz3jOSjX+rhkwKyvYVhTGuM+SFcMJRAKlBqn3ZlLhqG8MPEDIhF5FMf/DyB+FMomyd60PRTjvoaHwSMplWep/icyr/h5hXZiAS+wKeZhkHhtyL3mWLZ287mZ7HSqUq04bLVKdaguBrpoK7HXlYMN6T2qH7dKk+FtS8/7etAE/4Z7fM2HpcwYEvAEyEaCKUwfo3Rlo/Do30JpwR8qdCu2s5cw3VqoqOk=) 2026-03-10 00:24:22.467419 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBCsBrGGHbxBJpMPZcI30UrsM2f/trVvDGabU8j/GzF4vFSNscUWUtOgbsOVL95iJStKiiZaKpfuIsRIPNaWHBkM=) 2026-03-10 00:24:22.467431 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPU4IrUDtT7PHxNYzZagTb8yXqGKziw+7QdRpKwQFMhz) 2026-03-10 00:24:22.467442 | orchestrator | 2026-03-10 00:24:22.467453 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:24:22.467464 | orchestrator | Tuesday 10 March 2026 00:24:20 +0000 (0:00:01.085) 0:00:22.032 ********* 2026-03-10 00:24:22.467476 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCo1LgLw25ZDznSLhhYmpNOPTjdpsl6ghtH5I+YGvwCqd0A864osY4Gi7NvESrgNGxEv3MlunbPJUSYEQ05snK+fPuaKp3dRt+iU/rnF6nMneJb7uNNJJxrLs9Wmw4vTwM83S9giTI7iK+M74bkG7TRqYSb73rAopujloPY4tD2cDHEQG7oFZCf/F+us2aY+8dQYoSTuRb20APaO/0+2g9xeT2nxqwQuiiHIFoLe6EgQ0fqjMXJS3CEGZyU6kmBHCK+D7r9/iS1wqUXvE1lImsO84mEr3zZv1khT4VKEsI0ISWiZfFDwm+bsCC7NILo9kda7KRk2BGLRGrGBtvopIJ2SrPDdx9/Ae00/PCYoaMeNAxiW0+uMbhwhenfovdOgvaQZa0DYbFQfm5hK8gRoBWVwW5rkB5GGqYa0TC4chNDMyhORpNuLkj60Ba0ltjdQRFgZm5VNOnGCOrTtMuQcmy+FuNS0/OR8ExkFyYmAv5OX40Y229NfnPVfViWGwXp4XU=) 2026-03-10 00:24:22.467488 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGwiUJFBhrtUsLYIwlV9/Cz/5CxZptjApkOMUL8FS9nFarZp65uEVZSXxWfp/wcsE7+R3q9AuqP1jOJ81DplArc=) 2026-03-10 00:24:22.467499 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIJo9DiEP0w2Rl8gzr/TWkHdpbpkpcfV0780GyTjNy1BK) 2026-03-10 00:24:22.467510 | orchestrator | 2026-03-10 00:24:22.467522 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:24:22.467533 | orchestrator | Tuesday 10 March 2026 00:24:21 +0000 (0:00:01.101) 0:00:23.134 ********* 2026-03-10 00:24:22.467544 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOt6d+9IlAxz0yKsBn6AzRyxw2XBd93/j2JuiSQVkthT) 2026-03-10 00:24:22.467561 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC+z/L59nLgB5ZPjh0s+GUwmA3bPbVfCJz5oVCLoK9I8O80ZfrHuMscrRGBsZ1HuJxTZe1d8AbFBqGTJGvzWJAHq5YYWdzzSDMPKt5PflkaYk40X6wdFwC+ShcqiumKdwLqsLVIj117K59jLYoik78WTGRznhVjNhYcjj+2b0nY2auMVUU4RoxmynZg6vmO6rHBbSZcFtOcHGhz7G5x6waQuhUrxmnb+tnFkpWyYKDFACKQUb1m37bG+0lHt6YHRMSDp6SYBYlzINjbyWfmBFlu/Y1AKKlOwVxejbOAkzJ85WZ76m5f5Iv8UYsKn4Bi0i4rbfXlNLitC1E/waA3HYbRbAmWnVhbSbTTOjdtpDP1EtPk9JS7wHiSWzv7sx2PHv/C62FtU44uII/IGcTP7n+s4PxS/u5XAPGMqm1szXWvWuKyFUMEYb3QoMRInp0ubPE6dlxOJLRYDYtTd/9gTMtGhqN5hJANJjw2b6MMygRRDsr8A/nRPfLHTSTr/xiXbnk=) 2026-03-10 00:24:22.467586 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKP+NnrwHo74Jg3muyAi1AnhUGfCG9WknFyTUBlDsSXMorLI8U5fqfBVPPuHdI2zI21ucVuRv9pTdq4ymeFDZRY=) 2026-03-10 00:24:26.959770 | orchestrator | 2026-03-10 00:24:26.959930 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:24:26.959963 | orchestrator | Tuesday 10 March 2026 00:24:22 +0000 (0:00:01.068) 0:00:24.203 
********* 2026-03-10 00:24:26.959992 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGpzohPItffhYRXMEqovuIUrF9VhEmw5G5pPhmzLvHZzTVhJ2B3+nTVUTlRnqIbLAcPdh2NdCRxSMPprG7AYS+w=) 2026-03-10 00:24:26.960035 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDmi5k4OZf82VuSJgFYfQwRgKGP9JnQrL+RhttmHCHvj75yR/FlKnISrMimq3L/JJj0FvJK0/o+iBGiv1o5lKaKqkTblFMLNQDpXF7sx0BvsNKf6pJAzhmSgTGJtEXzyYSyyarYZ7uQYTS99zNS5S8YYWI0+tDYR6uzErdMqeYZRrBgtOm0u/Chu/oikw+C7aC9SOSV70KVlLCKnXFqXuX8ysoO1HSQP1hISo1IbJPT0hF/9K5M6qDLGPNH2l76/m4rn8vOtQs/VL/kpOgwSnTDPM+DEnW3hwvhadQPhvBeWeO6A7iVa3Q4ak4bg3tk7AK4Uy2zFa5Yxkttl1WBAbsO12hJIF9Fa2jA5NFQFEUoqtbT0yhlenZJdrP66sN43sFBQBTOU0jUv3zHV8MzST8j0BH4y2n045tcZIXQrSIJKy7ZQCTE+D5PMgZYqL75XnaQloNI6Vkyi86laL9dqvZCcJHe5UzyYqejAL9nZIVuhymTrLDiAFWRRP8hRm4Oom0=) 2026-03-10 00:24:26.960079 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDig8J/2WGtAd4enNM9DOQ9ShD7WCmhtqCqAVcK9z5dv) 2026-03-10 00:24:26.960093 | orchestrator | 2026-03-10 00:24:26.960105 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:24:26.960116 | orchestrator | Tuesday 10 March 2026 00:24:23 +0000 (0:00:01.041) 0:00:25.244 ********* 2026-03-10 00:24:26.960128 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCwXas65hbuK9kOoI14KqUs63q72BlPJxBMbVUaniKO0B9wmUaryft+XVXRIf9cRjqn7GLQj+AIASZu1jgrW4xzcKKhrcEDt7boLOSdWFHx8y4bpB/D1cvJhG/Xm2ndWCxeFUxzze/lInQKWkZ8umEMketyLa2XzbZ2vlXvlayENih6WI3ixUvU8aH3pqYVrrvzFkYYIRcQX19Zmh47WhjoV3wEKHgov0x6WdTu8hxMsxlY4fGQF1CS0ovozzYUw0rJliYqMQxB/uf/YIZCalf2//dqYZnWnQ+xKZMtXKDeKoHKOvMjuKQTB5fJvdJkRx6HWTC66ChFsayOb0ONuKoDJYpvqoUwj+zTGiJ4C1r37nN4mgfaDVb3n9C8KzNvHxl2qBVBqjTq0TFtQluJWWOcCEER7/a2IQ9nrcD7UclTBEj6Tz3acmYXbnWKc5Fbm6JQKjoMi5Rv+7ZSVj4kToaTkHECjTtte1Y3E8CtZyr9c5aC2cJXzKfEUCVB/t0DN8c=) 2026-03-10 00:24:26.960140 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIvVjEXp86QI90yF3dvLKhIc4mPq5bJesEx1THVKLMQjotjuBea6+RlEVLCZdiLDU8571j4qj4Bt9W/aRvTDywI=) 2026-03-10 00:24:26.960152 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHgwMke8TxAHoVYBHi9lyXv5NxAkJStUm8Pmwj2VVtsl) 2026-03-10 00:24:26.960162 | orchestrator | 2026-03-10 00:24:26.960174 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-10 00:24:26.960185 | orchestrator | Tuesday 10 March 2026 00:24:24 +0000 (0:00:01.124) 0:00:26.369 ********* 2026-03-10 00:24:26.960196 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCqaPWTXZMRx3X5Mygzexua6Y/Db9oo5Mqhu425OlktzqmeA7NvnBNrCtdWMC+4aK4n42/8wax+WCfK1X8weYYTC2oUoIZb17lCqRXCpUBPf00tyZPbs+1JktgLUE0YoZNsDfcmST/A02HMpFQ+nLErAL+H1aLCVQ8iVcIeL+8dg+xH7MIs6nyM7+ZdwC2+6OjaCpzlQt/O0Mj0pgJLrNZPcdGQVmX4MQ5c5Vp5MvGm8+BT241GVZ/wMDVm5ptrAhG0BF8YdWyIt+gvllSGSaV5BE41V3vK+0yMcked58Y3ovyIYJpQVLMby8iYYFne4JMPFzhjVpIieLkX/O4+1mNSYSJW6AKsxU9DULW8R4tNsRd0ST341nvsnZEeuMZUm82ZAmUgj+A6uXz7C/VkYjP+DgIOLO3WSW8rDHvRnaFAPnMhR7braULg8hQNguu7YFsvyI4dh4+f+GgDwgpUsT7ykxrNEt5n1FmBNjwMMhJ+ag9kQ0b6ZHxMsxQwmSFN9Zs=) 2026-03-10 00:24:26.960222 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBEsv2jGNoQWx2f9256PVfZ8+hIFe8WdrWA51Mqn2QMQqH6rntC5P7fEgYlvNAhGR/waTuMuddPak430oeGILVfM=) 2026-03-10 00:24:26.960233 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPVy0L+aGRDN7L1u1JK7elf6i8kHKCS41wrds8YwzTjw) 2026-03-10 00:24:26.960244 | orchestrator | 2026-03-10 00:24:26.960255 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-10 00:24:26.960266 | orchestrator | Tuesday 10 March 2026 00:24:25 +0000 (0:00:01.094) 0:00:27.464 ********* 2026-03-10 00:24:26.960277 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-10 00:24:26.960289 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-10 00:24:26.960300 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-10 00:24:26.960311 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-10 00:24:26.960322 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-10 00:24:26.960332 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-10 00:24:26.960343 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-10 00:24:26.960355 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:24:26.960366 | orchestrator | 2026-03-10 00:24:26.960404 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-10 00:24:26.960422 | orchestrator | Tuesday 10 March 2026 00:24:25 +0000 (0:00:00.175) 0:00:27.639 ********* 2026-03-10 00:24:26.960452 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:24:26.960470 | orchestrator | 2026-03-10 00:24:26.960487 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-10 00:24:26.960505 | orchestrator | Tuesday 10 March 2026 
00:24:25 +0000 (0:00:00.045) 0:00:27.684 ********* 2026-03-10 00:24:26.960524 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:24:26.960543 | orchestrator | 2026-03-10 00:24:26.960562 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-10 00:24:26.960580 | orchestrator | Tuesday 10 March 2026 00:24:26 +0000 (0:00:00.044) 0:00:27.729 ********* 2026-03-10 00:24:26.960598 | orchestrator | changed: [testbed-manager] 2026-03-10 00:24:26.960617 | orchestrator | 2026-03-10 00:24:26.960634 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:24:26.960654 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-10 00:24:26.960675 | orchestrator | 2026-03-10 00:24:26.960694 | orchestrator | 2026-03-10 00:24:26.960713 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:24:26.960725 | orchestrator | Tuesday 10 March 2026 00:24:26 +0000 (0:00:00.737) 0:00:28.466 ********* 2026-03-10 00:24:26.960736 | orchestrator | =============================================================================== 2026-03-10 00:24:26.960747 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.04s 2026-03-10 00:24:26.960758 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.40s 2026-03-10 00:24:26.960770 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.21s 2026-03-10 00:24:26.960781 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2026-03-10 00:24:26.960791 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 2026-03-10 00:24:26.960802 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.12s 
2026-03-10 00:24:26.960813 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.11s 2026-03-10 00:24:26.960824 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-03-10 00:24:26.960834 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-03-10 00:24:26.960845 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.10s 2026-03-10 00:24:26.960856 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-03-10 00:24:26.960867 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2026-03-10 00:24:26.960878 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-03-10 00:24:26.960960 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.08s 2026-03-10 00:24:26.960975 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.07s 2026-03-10 00:24:26.960987 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-10 00:24:26.960997 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.74s 2026-03-10 00:24:26.961008 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.20s 2026-03-10 00:24:26.961020 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.18s 2026-03-10 00:24:26.961031 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.18s 2026-03-10 00:24:27.287846 | orchestrator | + osism apply squid 2026-03-10 00:24:39.420792 | orchestrator | 2026-03-10 00:24:39 | INFO  | Prepare task for execution of squid. 
2026-03-10 00:24:39.493129 | orchestrator | 2026-03-10 00:24:39 | INFO  | Task 815fb6ca-1184-4c17-a57e-3cd6302dea92 (squid) was prepared for execution. 2026-03-10 00:24:39.493240 | orchestrator | 2026-03-10 00:24:39 | INFO  | It takes a moment until task 815fb6ca-1184-4c17-a57e-3cd6302dea92 (squid) has been started and output is visible here. 2026-03-10 00:26:41.194451 | orchestrator | 2026-03-10 00:26:41.194568 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-10 00:26:41.194589 | orchestrator | 2026-03-10 00:26:41.194603 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-10 00:26:41.194615 | orchestrator | Tuesday 10 March 2026 00:24:43 +0000 (0:00:00.166) 0:00:00.166 ********* 2026-03-10 00:26:41.194627 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-10 00:26:41.194640 | orchestrator | 2026-03-10 00:26:41.194652 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-10 00:26:41.194666 | orchestrator | Tuesday 10 March 2026 00:24:43 +0000 (0:00:00.085) 0:00:00.251 ********* 2026-03-10 00:26:41.194678 | orchestrator | ok: [testbed-manager] 2026-03-10 00:26:41.194690 | orchestrator | 2026-03-10 00:26:41.194703 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-10 00:26:41.194716 | orchestrator | Tuesday 10 March 2026 00:24:45 +0000 (0:00:01.589) 0:00:01.841 ********* 2026-03-10 00:26:41.194728 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-10 00:26:41.194739 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-10 00:26:41.194752 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-10 00:26:41.194765 | orchestrator | 2026-03-10 00:26:41.194777 
| orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-10 00:26:41.194790 | orchestrator | Tuesday 10 March 2026 00:24:46 +0000 (0:00:01.191) 0:00:03.032 ********* 2026-03-10 00:26:41.194803 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-10 00:26:41.194815 | orchestrator | 2026-03-10 00:26:41.194827 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-10 00:26:41.194838 | orchestrator | Tuesday 10 March 2026 00:24:47 +0000 (0:00:01.159) 0:00:04.191 ********* 2026-03-10 00:26:41.194914 | orchestrator | ok: [testbed-manager] 2026-03-10 00:26:41.194929 | orchestrator | 2026-03-10 00:26:41.194941 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-10 00:26:41.194989 | orchestrator | Tuesday 10 March 2026 00:24:48 +0000 (0:00:00.381) 0:00:04.573 ********* 2026-03-10 00:26:41.195008 | orchestrator | changed: [testbed-manager] 2026-03-10 00:26:41.195024 | orchestrator | 2026-03-10 00:26:41.195038 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-10 00:26:41.195054 | orchestrator | Tuesday 10 March 2026 00:24:49 +0000 (0:00:00.950) 0:00:05.523 ********* 2026-03-10 00:26:41.195067 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-10 00:26:41.195081 | orchestrator | ok: [testbed-manager] 2026-03-10 00:26:41.195093 | orchestrator | 2026-03-10 00:26:41.195106 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-10 00:26:41.195119 | orchestrator | Tuesday 10 March 2026 00:25:28 +0000 (0:00:38.874) 0:00:44.397 ********* 2026-03-10 00:26:41.195132 | orchestrator | changed: [testbed-manager] 2026-03-10 00:26:41.195145 | orchestrator | 2026-03-10 00:26:41.195158 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-10 00:26:41.195172 | orchestrator | Tuesday 10 March 2026 00:25:40 +0000 (0:00:11.996) 0:00:56.394 ********* 2026-03-10 00:26:41.195185 | orchestrator | Pausing for 60 seconds 2026-03-10 00:26:41.195199 | orchestrator | changed: [testbed-manager] 2026-03-10 00:26:41.195211 | orchestrator | 2026-03-10 00:26:41.195224 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-10 00:26:41.195237 | orchestrator | Tuesday 10 March 2026 00:26:40 +0000 (0:01:00.100) 0:01:56.494 ********* 2026-03-10 00:26:41.195250 | orchestrator | ok: [testbed-manager] 2026-03-10 00:26:41.195262 | orchestrator | 2026-03-10 00:26:41.195275 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-10 00:26:41.195315 | orchestrator | Tuesday 10 March 2026 00:26:40 +0000 (0:00:00.068) 0:01:56.562 ********* 2026-03-10 00:26:41.195328 | orchestrator | changed: [testbed-manager] 2026-03-10 00:26:41.195341 | orchestrator | 2026-03-10 00:26:41.195354 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:26:41.195367 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:26:41.195378 | orchestrator | 2026-03-10 00:26:41.195391 | orchestrator | 2026-03-10 00:26:41.195403 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-10 00:26:41.195416 | orchestrator | Tuesday 10 March 2026 00:26:40 +0000 (0:00:00.662) 0:01:57.224 ********* 2026-03-10 00:26:41.195427 | orchestrator | =============================================================================== 2026-03-10 00:26:41.195439 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.10s 2026-03-10 00:26:41.195452 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 38.87s 2026-03-10 00:26:41.195463 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.00s 2026-03-10 00:26:41.195474 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.59s 2026-03-10 00:26:41.195486 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.19s 2026-03-10 00:26:41.195497 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.16s 2026-03-10 00:26:41.195508 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s 2026-03-10 00:26:41.195519 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.66s 2026-03-10 00:26:41.195529 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.38s 2026-03-10 00:26:41.195540 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.09s 2026-03-10 00:26:41.195550 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-03-10 00:26:41.556360 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-10 00:26:41.556459 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-10 00:26:41.561751 | orchestrator | + set -e 2026-03-10 00:26:41.561789 | orchestrator | + NAMESPACE=kolla 2026-03-10 
00:26:41.561803 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-10 00:26:41.566453 | orchestrator | ++ semver latest 9.0.0 2026-03-10 00:26:41.618121 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-10 00:26:41.618213 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-10 00:26:41.618421 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-10 00:26:53.594572 | orchestrator | 2026-03-10 00:26:53 | INFO  | Prepare task for execution of operator. 2026-03-10 00:26:53.669587 | orchestrator | 2026-03-10 00:26:53 | INFO  | Task 110f7cb7-97d7-4d2c-b4d5-e689f731e20c (operator) was prepared for execution. 2026-03-10 00:26:53.669667 | orchestrator | 2026-03-10 00:26:53 | INFO  | It takes a moment until task 110f7cb7-97d7-4d2c-b4d5-e689f731e20c (operator) has been started and output is visible here. 2026-03-10 00:27:09.957994 | orchestrator | 2026-03-10 00:27:09.958162 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-10 00:27:09.958182 | orchestrator | 2026-03-10 00:27:09.958194 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-10 00:27:09.958207 | orchestrator | Tuesday 10 March 2026 00:26:58 +0000 (0:00:00.152) 0:00:00.152 ********* 2026-03-10 00:27:09.958219 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:27:09.958232 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:27:09.958243 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:27:09.958254 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:27:09.958265 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:27:09.958280 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:27:09.958292 | orchestrator | 2026-03-10 00:27:09.958303 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-10 00:27:09.958341 | orchestrator | Tuesday 10 March 2026 00:27:01 
+0000 (0:00:03.399) 0:00:03.551 ********* 2026-03-10 00:27:09.958353 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:27:09.958364 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:27:09.958375 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:27:09.958386 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:27:09.958397 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:27:09.958407 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:27:09.958418 | orchestrator | 2026-03-10 00:27:09.958429 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-10 00:27:09.958440 | orchestrator | 2026-03-10 00:27:09.958451 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-10 00:27:09.958463 | orchestrator | Tuesday 10 March 2026 00:27:02 +0000 (0:00:00.837) 0:00:04.388 ********* 2026-03-10 00:27:09.958474 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:27:09.958485 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:27:09.958497 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:27:09.958510 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:27:09.958522 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:27:09.958534 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:27:09.958547 | orchestrator | 2026-03-10 00:27:09.958559 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-10 00:27:09.958572 | orchestrator | Tuesday 10 March 2026 00:27:02 +0000 (0:00:00.189) 0:00:04.578 ********* 2026-03-10 00:27:09.958584 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:27:09.958597 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:27:09.958608 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:27:09.958621 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:27:09.958651 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:27:09.958664 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:27:09.958676 | orchestrator | 
2026-03-10 00:27:09.958688 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-10 00:27:09.958702 | orchestrator | Tuesday 10 March 2026 00:27:02 +0000 (0:00:00.207) 0:00:04.786 ********* 2026-03-10 00:27:09.958714 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:27:09.958728 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:27:09.958741 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:27:09.958753 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:27:09.958766 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:27:09.958777 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:27:09.958788 | orchestrator | 2026-03-10 00:27:09.958799 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-10 00:27:09.958810 | orchestrator | Tuesday 10 March 2026 00:27:03 +0000 (0:00:00.590) 0:00:05.376 ********* 2026-03-10 00:27:09.958821 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:27:09.958832 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:27:09.958873 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:27:09.958885 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:27:09.958896 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:27:09.958907 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:27:09.958919 | orchestrator | 2026-03-10 00:27:09.958930 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-10 00:27:09.958941 | orchestrator | Tuesday 10 March 2026 00:27:04 +0000 (0:00:00.835) 0:00:06.212 ********* 2026-03-10 00:27:09.958952 | orchestrator | changed: [testbed-node-0] => (item=adm) 2026-03-10 00:27:09.958963 | orchestrator | changed: [testbed-node-2] => (item=adm) 2026-03-10 00:27:09.958974 | orchestrator | changed: [testbed-node-1] => (item=adm) 2026-03-10 00:27:09.958985 | orchestrator | changed: [testbed-node-3] => (item=adm) 
2026-03-10 00:27:09.958996 | orchestrator | changed: [testbed-node-5] => (item=adm) 2026-03-10 00:27:09.959007 | orchestrator | changed: [testbed-node-4] => (item=adm) 2026-03-10 00:27:09.959018 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2026-03-10 00:27:09.959029 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2026-03-10 00:27:09.959048 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2026-03-10 00:27:09.959059 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2026-03-10 00:27:09.959073 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2026-03-10 00:27:09.959091 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2026-03-10 00:27:09.959103 | orchestrator | 2026-03-10 00:27:09.959114 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-10 00:27:09.959125 | orchestrator | Tuesday 10 March 2026 00:27:05 +0000 (0:00:01.186) 0:00:07.399 ********* 2026-03-10 00:27:09.959136 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:27:09.959147 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:27:09.959157 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:27:09.959168 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:27:09.959179 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:27:09.959190 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:27:09.959203 | orchestrator | 2026-03-10 00:27:09.959221 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-10 00:27:09.959238 | orchestrator | Tuesday 10 March 2026 00:27:06 +0000 (0:00:01.265) 0:00:08.664 ********* 2026-03-10 00:27:09.959255 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8) 2026-03-10 00:27:09.959274 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8) 2026-03-10 00:27:09.959292 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8) 
2026-03-10 00:27:09.959310 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8) 2026-03-10 00:27:09.959328 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8) 2026-03-10 00:27:09.959371 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8) 2026-03-10 00:27:09.959391 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8) 2026-03-10 00:27:09.959410 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8) 2026-03-10 00:27:09.959426 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8) 2026-03-10 00:27:09.959437 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8) 2026-03-10 00:27:09.959448 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8) 2026-03-10 00:27:09.959458 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8) 2026-03-10 00:27:09.959469 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8) 2026-03-10 00:27:09.959480 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8) 2026-03-10 00:27:09.959491 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2026-03-10 00:27:09.959502 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To 2026-03-10 00:27:09.959520 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually 2026-03-10 00:27:09.959535 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8) 2026-03-10 00:27:09.959552 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8) 2026-03-10 00:27:09.959570 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8) 2026-03-10 00:27:09.959589 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8) 2026-03-10 00:27:09.959605 | orchestrator | 2026-03-10 00:27:09.959617 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-10 00:27:09.959629 | orchestrator | Tuesday 10 March 2026 00:27:07 +0000 (0:00:01.265) 0:00:09.930 ********* 2026-03-10 00:27:09.959640 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:27:09.959650 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:27:09.959661 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:27:09.959672 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:27:09.959683 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:27:09.959694 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:27:09.959704 | orchestrator | 2026-03-10 00:27:09.959715 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-10 00:27:09.959736 | orchestrator | Tuesday 10 March 2026 00:27:07 +0000 (0:00:00.157) 0:00:10.088 ********* 2026-03-10 00:27:09.959747 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:27:09.959758 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:27:09.959769 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:27:09.959779 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:27:09.959790 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:27:09.959800 | orchestrator | skipping: [testbed-node-5] 2026-03-10 
00:27:09.959811 | orchestrator | 2026-03-10 00:27:09.959822 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-10 00:27:09.959833 | orchestrator | Tuesday 10 March 2026 00:27:08 +0000 (0:00:00.211) 0:00:10.300 ********* 2026-03-10 00:27:09.959904 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:27:09.959916 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:27:09.959928 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:27:09.959939 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:27:09.959950 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:27:09.959962 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:27:09.959973 | orchestrator | 2026-03-10 00:27:09.959984 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-10 00:27:09.959996 | orchestrator | Tuesday 10 March 2026 00:27:08 +0000 (0:00:00.607) 0:00:10.907 ********* 2026-03-10 00:27:09.960007 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:27:09.960019 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:27:09.960030 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:27:09.960041 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:27:09.960053 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:27:09.960064 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:27:09.960075 | orchestrator | 2026-03-10 00:27:09.960087 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-10 00:27:09.960098 | orchestrator | Tuesday 10 March 2026 00:27:09 +0000 (0:00:00.201) 0:00:11.108 ********* 2026-03-10 00:27:09.960110 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-10 00:27:09.960122 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:27:09.960133 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-10 00:27:09.960144 | orchestrator | changed: 
[testbed-node-5] 2026-03-10 00:27:09.960156 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-10 00:27:09.960167 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-10 00:27:09.960179 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:27:09.960190 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-10 00:27:09.960202 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-10 00:27:09.960213 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:27:09.960224 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:27:09.960235 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:27:09.960247 | orchestrator | 2026-03-10 00:27:09.960258 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-10 00:27:09.960270 | orchestrator | Tuesday 10 March 2026 00:27:09 +0000 (0:00:00.675) 0:00:11.784 ********* 2026-03-10 00:27:09.960281 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:27:09.960293 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:27:09.960304 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:27:09.960315 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:27:09.960326 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:27:09.960338 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:27:09.960349 | orchestrator | 2026-03-10 00:27:09.960360 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-10 00:27:09.960372 | orchestrator | Tuesday 10 March 2026 00:27:09 +0000 (0:00:00.147) 0:00:11.931 ********* 2026-03-10 00:27:09.960383 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:27:09.960395 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:27:09.960406 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:27:09.960425 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:27:09.960447 | orchestrator | skipping: [testbed-node-4] 
2026-03-10 00:27:11.271982 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:27:11.272102 | orchestrator | 2026-03-10 00:27:11.272125 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-10 00:27:11.272144 | orchestrator | Tuesday 10 March 2026 00:27:09 +0000 (0:00:00.141) 0:00:12.073 ********* 2026-03-10 00:27:11.272161 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:27:11.272177 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:27:11.272194 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:27:11.272210 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:27:11.272227 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:27:11.272244 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:27:11.272260 | orchestrator | 2026-03-10 00:27:11.272278 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-10 00:27:11.272296 | orchestrator | Tuesday 10 March 2026 00:27:10 +0000 (0:00:00.156) 0:00:12.230 ********* 2026-03-10 00:27:11.272313 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:27:11.272332 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:27:11.272350 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:27:11.272369 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:27:11.272387 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:27:11.272406 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:27:11.272418 | orchestrator | 2026-03-10 00:27:11.272429 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-10 00:27:11.272440 | orchestrator | Tuesday 10 March 2026 00:27:10 +0000 (0:00:00.658) 0:00:12.888 ********* 2026-03-10 00:27:11.272453 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:27:11.272466 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:27:11.272479 | orchestrator | skipping: [testbed-node-2] 2026-03-10 
00:27:11.272492 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:27:11.272504 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:27:11.272516 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:27:11.272529 | orchestrator | 2026-03-10 00:27:11.272542 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:27:11.272556 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-10 00:27:11.272593 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-10 00:27:11.272607 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-10 00:27:11.272620 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-10 00:27:11.272632 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-10 00:27:11.272644 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-10 00:27:11.272656 | orchestrator | 2026-03-10 00:27:11.272669 | orchestrator | 2026-03-10 00:27:11.272681 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:27:11.272694 | orchestrator | Tuesday 10 March 2026 00:27:11 +0000 (0:00:00.230) 0:00:13.119 ********* 2026-03-10 00:27:11.272706 | orchestrator | =============================================================================== 2026-03-10 00:27:11.272718 | orchestrator | Gathering Facts --------------------------------------------------------- 3.40s 2026-03-10 00:27:11.272730 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.27s 2026-03-10 00:27:11.272743 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 
1.27s 2026-03-10 00:27:11.272780 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s 2026-03-10 00:27:11.272793 | orchestrator | Do not require tty for all users ---------------------------------------- 0.84s 2026-03-10 00:27:11.272805 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.84s 2026-03-10 00:27:11.272816 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.68s 2026-03-10 00:27:11.272828 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.66s 2026-03-10 00:27:11.272878 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.61s 2026-03-10 00:27:11.272896 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.59s 2026-03-10 00:27:11.272914 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s 2026-03-10 00:27:11.272932 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.21s 2026-03-10 00:27:11.272951 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.21s 2026-03-10 00:27:11.272969 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.20s 2026-03-10 00:27:11.272987 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.19s 2026-03-10 00:27:11.273007 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s 2026-03-10 00:27:11.273027 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.16s 2026-03-10 00:27:11.273047 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.15s 2026-03-10 00:27:11.273066 | orchestrator | osism.commons.operator : Set authorized GitHub accounts 
----------------- 0.14s 2026-03-10 00:27:11.614740 | orchestrator | + osism apply --environment custom facts 2026-03-10 00:27:13.716544 | orchestrator | 2026-03-10 00:27:13 | INFO  | Trying to run play facts in environment custom 2026-03-10 00:27:23.723404 | orchestrator | 2026-03-10 00:27:23 | INFO  | Prepare task for execution of facts. 2026-03-10 00:27:23.798876 | orchestrator | 2026-03-10 00:27:23 | INFO  | Task 9304ffc5-32d6-4ed2-9a73-b47fcf93144c (facts) was prepared for execution. 2026-03-10 00:27:23.798943 | orchestrator | 2026-03-10 00:27:23 | INFO  | It takes a moment until task 9304ffc5-32d6-4ed2-9a73-b47fcf93144c (facts) has been started and output is visible here. 2026-03-10 00:28:08.323306 | orchestrator | 2026-03-10 00:28:08.323423 | orchestrator | PLAY [Copy custom network devices fact] **************************************** 2026-03-10 00:28:08.323440 | orchestrator | 2026-03-10 00:28:08.323451 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-10 00:28:08.323477 | orchestrator | Tuesday 10 March 2026 00:27:28 +0000 (0:00:00.075) 0:00:00.075 ********* 2026-03-10 00:28:08.323488 | orchestrator | ok: [testbed-manager] 2026-03-10 00:28:08.323499 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:28:08.323509 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:28:08.323519 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:28:08.323529 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:28:08.323538 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:28:08.323548 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:28:08.323558 | orchestrator | 2026-03-10 00:28:08.323568 | orchestrator | TASK [Copy fact file] ********************************************************** 2026-03-10 00:28:08.323578 | orchestrator | Tuesday 10 March 2026 00:27:29 +0000 (0:00:01.314) 0:00:01.390 ********* 2026-03-10 00:28:08.323587 | orchestrator | ok: [testbed-manager] 2026-03-10 
00:28:08.323597 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:28:08.323607 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:28:08.323618 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:28:08.323628 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:28:08.323637 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:28:08.323647 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:28:08.323680 | orchestrator | 2026-03-10 00:28:08.323690 | orchestrator | PLAY [Copy custom ceph devices facts] ****************************************** 2026-03-10 00:28:08.323700 | orchestrator | 2026-03-10 00:28:08.323710 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-10 00:28:08.323720 | orchestrator | Tuesday 10 March 2026 00:27:30 +0000 (0:00:01.217) 0:00:02.608 ********* 2026-03-10 00:28:08.323729 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:28:08.323746 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:28:08.323763 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:28:08.323779 | orchestrator | 2026-03-10 00:28:08.323798 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-10 00:28:08.323881 | orchestrator | Tuesday 10 March 2026 00:27:30 +0000 (0:00:00.138) 0:00:02.746 ********* 2026-03-10 00:28:08.323900 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:28:08.323911 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:28:08.323923 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:28:08.323934 | orchestrator | 2026-03-10 00:28:08.323945 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-10 00:28:08.323956 | orchestrator | Tuesday 10 March 2026 00:27:30 +0000 (0:00:00.200) 0:00:02.946 ********* 2026-03-10 00:28:08.323968 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:28:08.323978 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:28:08.323989 | orchestrator 
| ok: [testbed-node-5] 2026-03-10 00:28:08.324000 | orchestrator | 2026-03-10 00:28:08.324011 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-10 00:28:08.324022 | orchestrator | Tuesday 10 March 2026 00:27:31 +0000 (0:00:00.223) 0:00:03.169 ********* 2026-03-10 00:28:08.324035 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:28:08.324048 | orchestrator | 2026-03-10 00:28:08.324059 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-10 00:28:08.324087 | orchestrator | Tuesday 10 March 2026 00:27:31 +0000 (0:00:00.181) 0:00:03.351 ********* 2026-03-10 00:28:08.324123 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:28:08.324146 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:28:08.324161 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:28:08.324177 | orchestrator | 2026-03-10 00:28:08.324192 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-10 00:28:08.324207 | orchestrator | Tuesday 10 March 2026 00:27:31 +0000 (0:00:00.479) 0:00:03.830 ********* 2026-03-10 00:28:08.324222 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:28:08.324239 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:28:08.324255 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:28:08.324270 | orchestrator | 2026-03-10 00:28:08.324287 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-10 00:28:08.324305 | orchestrator | Tuesday 10 March 2026 00:27:32 +0000 (0:00:00.135) 0:00:03.966 ********* 2026-03-10 00:28:08.324321 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:28:08.324340 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:28:08.324352 | orchestrator | changed: [testbed-node-5] 
2026-03-10 00:28:08.324363 | orchestrator | 2026-03-10 00:28:08.324374 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-10 00:28:08.324385 | orchestrator | Tuesday 10 March 2026 00:27:33 +0000 (0:00:01.029) 0:00:04.995 ********* 2026-03-10 00:28:08.324396 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:28:08.324407 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:28:08.324418 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:28:08.324429 | orchestrator | 2026-03-10 00:28:08.324440 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-10 00:28:08.324451 | orchestrator | Tuesday 10 March 2026 00:27:33 +0000 (0:00:00.439) 0:00:05.435 ********* 2026-03-10 00:28:08.324462 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:28:08.324473 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:28:08.324484 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:28:08.324508 | orchestrator | 2026-03-10 00:28:08.324519 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-10 00:28:08.324530 | orchestrator | Tuesday 10 March 2026 00:27:34 +0000 (0:00:01.013) 0:00:06.449 ********* 2026-03-10 00:28:08.324541 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:28:08.324551 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:28:08.324562 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:28:08.324573 | orchestrator | 2026-03-10 00:28:08.324584 | orchestrator | TASK [Install required packages (RedHat)] ************************************** 2026-03-10 00:28:08.324595 | orchestrator | Tuesday 10 March 2026 00:27:50 +0000 (0:00:15.878) 0:00:22.327 ********* 2026-03-10 00:28:08.324606 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:28:08.324617 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:28:08.324628 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:28:08.324642 
| orchestrator | 2026-03-10 00:28:08.324660 | orchestrator | TASK [Install required packages (Debian)] ************************************** 2026-03-10 00:28:08.324702 | orchestrator | Tuesday 10 March 2026 00:27:50 +0000 (0:00:00.109) 0:00:22.436 ********* 2026-03-10 00:28:08.324723 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:28:08.324742 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:28:08.324760 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:28:08.324779 | orchestrator | 2026-03-10 00:28:08.324791 | orchestrator | TASK [Create custom facts directory] ******************************************* 2026-03-10 00:28:08.324802 | orchestrator | Tuesday 10 March 2026 00:27:59 +0000 (0:00:08.535) 0:00:30.972 ********* 2026-03-10 00:28:08.324813 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:28:08.324846 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:28:08.324857 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:28:08.324868 | orchestrator | 2026-03-10 00:28:08.324879 | orchestrator | TASK [Copy fact files] ********************************************************* 2026-03-10 00:28:08.324891 | orchestrator | Tuesday 10 March 2026 00:27:59 +0000 (0:00:00.443) 0:00:31.416 ********* 2026-03-10 00:28:08.324902 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices) 2026-03-10 00:28:08.324914 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices) 2026-03-10 00:28:08.324925 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices) 2026-03-10 00:28:08.324936 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all) 2026-03-10 00:28:08.324947 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all) 2026-03-10 00:28:08.324958 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all) 2026-03-10 00:28:08.324969 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices) 2026-03-10 00:28:08.324979 | 
orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices) 2026-03-10 00:28:08.324990 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices) 2026-03-10 00:28:08.325001 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all) 2026-03-10 00:28:08.325012 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all) 2026-03-10 00:28:08.325023 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all) 2026-03-10 00:28:08.325034 | orchestrator | 2026-03-10 00:28:08.325045 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-10 00:28:08.325056 | orchestrator | Tuesday 10 March 2026 00:28:03 +0000 (0:00:03.564) 0:00:34.980 ********* 2026-03-10 00:28:08.325067 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:28:08.325078 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:28:08.325089 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:28:08.325100 | orchestrator | 2026-03-10 00:28:08.325111 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-10 00:28:08.325122 | orchestrator | 2026-03-10 00:28:08.325133 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-10 00:28:08.325144 | orchestrator | Tuesday 10 March 2026 00:28:04 +0000 (0:00:01.410) 0:00:36.391 ********* 2026-03-10 00:28:08.325164 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:28:08.325175 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:28:08.325186 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:28:08.325197 | orchestrator | ok: [testbed-manager] 2026-03-10 00:28:08.325208 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:28:08.325262 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:28:08.325274 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:28:08.325285 | orchestrator | 2026-03-10 00:28:08.325296 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-10 00:28:08.325309 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:28:08.325320 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:28:08.325333 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:28:08.325344 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:28:08.325355 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:28:08.325366 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:28:08.325377 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:28:08.325388 | orchestrator | 2026-03-10 00:28:08.325399 | orchestrator | 2026-03-10 00:28:08.325410 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:28:08.325422 | orchestrator | Tuesday 10 March 2026 00:28:08 +0000 (0:00:03.870) 0:00:40.262 ********* 2026-03-10 00:28:08.325433 | orchestrator | =============================================================================== 2026-03-10 00:28:08.325444 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.88s 2026-03-10 00:28:08.325455 | orchestrator | Install required packages (Debian) -------------------------------------- 8.54s 2026-03-10 00:28:08.325466 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.87s 2026-03-10 00:28:08.325477 | orchestrator | Copy fact files --------------------------------------------------------- 3.56s 2026-03-10 00:28:08.325488 | orchestrator | 
osism.commons.repository : Force update of package cache ---------------- 1.41s 2026-03-10 00:28:08.325499 | orchestrator | Create custom facts directory ------------------------------------------- 1.31s 2026-03-10 00:28:08.325518 | orchestrator | Copy fact file ---------------------------------------------------------- 1.22s 2026-03-10 00:28:08.619206 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 1.03s 2026-03-10 00:28:08.619291 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.01s 2026-03-10 00:28:08.619296 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.48s 2026-03-10 00:28:08.619301 | orchestrator | Create custom facts directory ------------------------------------------- 0.44s 2026-03-10 00:28:08.619305 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s 2026-03-10 00:28:08.619309 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.22s 2026-03-10 00:28:08.619313 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.20s 2026-03-10 00:28:08.619317 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.18s 2026-03-10 00:28:08.619322 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.14s 2026-03-10 00:28:08.619326 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.14s 2026-03-10 00:28:08.619344 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s 2026-03-10 00:28:08.984937 | orchestrator | + osism apply bootstrap 2026-03-10 00:28:21.153491 | orchestrator | 2026-03-10 00:28:21 | INFO  | Prepare task for execution of bootstrap. 
2026-03-10 00:28:21.236102 | orchestrator | 2026-03-10 00:28:21 | INFO  | Task 4fe68dd5-4df5-4c74-a9cc-b57584b4efb6 (bootstrap) was prepared for execution. 2026-03-10 00:28:21.236209 | orchestrator | 2026-03-10 00:28:21 | INFO  | It takes a moment until task 4fe68dd5-4df5-4c74-a9cc-b57584b4efb6 (bootstrap) has been started and output is visible here. 2026-03-10 00:28:39.544741 | orchestrator | 2026-03-10 00:28:39.544911 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************ 2026-03-10 00:28:39.544931 | orchestrator | 2026-03-10 00:28:39.544943 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************ 2026-03-10 00:28:39.544955 | orchestrator | Tuesday 10 March 2026 00:28:26 +0000 (0:00:00.161) 0:00:00.161 ********* 2026-03-10 00:28:39.544967 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:28:39.544978 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:28:39.544990 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:28:39.545010 | orchestrator | ok: [testbed-manager] 2026-03-10 00:28:39.545031 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:28:39.545051 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:28:39.545071 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:28:39.545084 | orchestrator | 2026-03-10 00:28:39.545096 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-10 00:28:39.545107 | orchestrator | 2026-03-10 00:28:39.545118 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-10 00:28:39.545129 | orchestrator | Tuesday 10 March 2026 00:28:26 +0000 (0:00:00.293) 0:00:00.455 ********* 2026-03-10 00:28:39.545141 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:28:39.545153 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:28:39.545164 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:28:39.545175 | orchestrator | ok: [testbed-manager] 2026-03-10 
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]

PLAY [Gather facts for all hosts (if using --limit)] ***************************

TASK [Gathers facts about hosts] ***********************************************
Tuesday 10 March 2026 00:28:30 +0000 (0:00:04.327) 0:00:04.782 *********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-4] => (item=testbed-node-4)
skipping: [testbed-node-4] => (item=testbed-node-5)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-5] => (item=testbed-node-3)
skipping: [testbed-node-4] => (item=testbed-manager)
skipping: [testbed-node-3] => (item=testbed-manager)
skipping: [testbed-node-4] => (item=testbed-node-0)
skipping: [testbed-node-5] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-0)
skipping: [testbed-node-4] => (item=testbed-node-1)
skipping: [testbed-node-3] => (item=testbed-node-1)
skipping: [testbed-node-5] => (item=testbed-node-5)
skipping: [testbed-manager] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-2)
skipping: [testbed-node-4] => (item=testbed-node-2)
skipping: [testbed-node-3]
skipping: [testbed-manager] => (item=testbed-node-4)
skipping: [testbed-node-5] => (item=testbed-manager)
skipping: [testbed-manager] => (item=testbed-node-5)
skipping: [testbed-node-5] => (item=testbed-node-0)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-manager] => (item=testbed-manager)
skipping: [testbed-node-5] => (item=testbed-node-2)
skipping: [testbed-node-5]
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-manager] => (item=testbed-node-0)
skipping: [testbed-node-1] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-manager] => (item=testbed-node-1)
skipping: [testbed-node-2] => (item=testbed-node-3)
skipping: [testbed-node-1] => (item=testbed-node-4)
skipping: [testbed-manager] => (item=testbed-node-2)
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item=testbed-manager)
skipping: [testbed-node-2] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-1] => (item=testbed-node-5)
skipping: [testbed-node-2] => (item=testbed-node-5)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-1] => (item=testbed-manager)
skipping: [testbed-node-2] => (item=testbed-manager)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=testbed-node-0)
skipping: [testbed-node-2] => (item=testbed-node-0)
skipping: [testbed-node-1] => (item=testbed-node-1)
skipping: [testbed-node-2] => (item=testbed-node-1)
skipping: [testbed-node-1] => (item=testbed-node-2)
skipping: [testbed-node-2] => (item=testbed-node-2)
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY [Apply bootstrap roles part 1] ********************************************

TASK [osism.commons.hostname : Set hostname] ***********************************
Tuesday 10 March 2026 00:28:31 +0000 (0:00:00.494) 0:00:05.276 *********
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-manager]

TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
Tuesday 10 March 2026 00:28:33 +0000 (0:00:02.300) 0:00:07.577 *********
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-5]

TASK [osism.commons.hosts : Include type specific tasks] ***********************
Tuesday 10 March 2026 00:28:34 +0000 (0:00:01.312) 0:00:08.889 *********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
Tuesday 10 March 2026 00:28:35 +0000 (0:00:00.299) 0:00:09.188 *********
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [osism.commons.proxy : Include distribution specific tasks] ***************
Tuesday 10 March 2026 00:28:36 +0000 (0:00:01.889) 0:00:11.077 *********
skipping: [testbed-manager]
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.proxy : Configure proxy parameters for apt] ****************
Tuesday 10 March 2026 00:28:37 +0000 (0:00:00.265) 0:00:11.343 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.proxy : Set system wide settings in environment file] ******
Tuesday 10 March 2026 00:28:38 +0000 (0:00:01.006) 0:00:12.350 *********
skipping: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.proxy : Remove system wide settings in environment file] ***
Tuesday 10 March 2026 00:28:38 +0000 (0:00:00.584) 0:00:12.935 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-manager]

TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] ***
Tuesday 10 March 2026 00:28:39 +0000 (0:00:00.549) 0:00:13.485 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.resolvconf : Include resolvconf tasks] *********************
Tuesday 10 March 2026 00:28:39 +0000 (0:00:00.232) 0:00:13.717 *********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.resolvconf : Include distribution specific installation tasks] ***
Tuesday 10 March 2026 00:28:39 +0000 (0:00:00.326) 0:00:14.043 *********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] ***
Tuesday 10 March 2026 00:28:40 +0000 (0:00:00.433) 0:00:14.477 *********
ok: [testbed-node-2]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-4]
ok: [testbed-manager]
ok: [testbed-node-3]

TASK [osism.commons.resolvconf : Install package systemd-resolved] *************
Tuesday 10 March 2026 00:28:41 +0000 (0:00:01.236) 0:00:15.714 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] *****
Tuesday 10 March 2026 00:28:41 +0000 (0:00:00.238) 0:00:15.952 *********
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-manager]
ok: [testbed-node-2]

TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] *******
Tuesday 10 March 2026 00:28:42 +0000 (0:00:00.564) 0:00:16.517 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] ***
Tuesday 10 March 2026 00:28:42 +0000 (0:00:00.264) 0:00:16.781 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
ok: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.resolvconf : Copy configuration files] *********************
Tuesday 10 March 2026 00:28:43 +0000 (0:00:00.552) 0:00:17.334 *********
ok: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ********
Tuesday 10 March 2026 00:28:44 +0000 (0:00:01.082) 0:00:18.417 *********
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-manager]

TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] ***
Tuesday 10 March 2026 00:28:45 +0000 (0:00:01.014) 0:00:19.431 *********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.resolvconf : Restart systemd-resolved service] *************
Tuesday 10 March 2026 00:28:45 +0000 (0:00:00.354) 0:00:19.785 *********
skipping: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-2]

TASK [osism.commons.repository : Gather variables for each operating system] ***
Tuesday 10 March 2026 00:28:46 +0000 (0:00:01.283) 0:00:21.069 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.repository : Set repository_default fact to default value] ***
Tuesday 10 March 2026 00:28:47 +0000 (0:00:00.255) 0:00:21.324 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.repository : Set repositories to default] ******************
Tuesday 10 March 2026 00:28:47 +0000 (0:00:00.241) 0:00:21.566 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.repository : Include distribution specific repository tasks] ***
Tuesday 10 March 2026 00:28:47 +0000 (0:00:00.239) 0:00:21.805 *********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
Tuesday 10 March 2026 00:28:48 +0000 (0:00:00.362) 0:00:22.168 *********
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
Tuesday 10 March 2026 00:28:48 +0000 (0:00:00.560) 0:00:22.729 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
Tuesday 10 March 2026 00:28:48 +0000 (0:00:00.234) 0:00:22.964 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [osism.commons.repository : Remove sources.list file] *********************
Tuesday 10 March 2026 00:28:49 +0000 (0:00:01.058) 0:00:24.022 *********
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
Tuesday 10 March 2026 00:28:50 +0000 (0:00:00.541) 0:00:24.564 *********
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.repository : Update package cache] *************************
Tuesday 10 March 2026 00:28:51 +0000 (0:00:01.141) 0:00:25.706 *********
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-3]
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.rsyslog : Gather variables for each operating system] *****
Tuesday 10 March 2026 00:29:07 +0000 (0:00:15.577) 0:00:41.283 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] *****
Tuesday 10 March 2026 00:29:07 +0000 (0:00:00.253) 0:00:41.515 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] ***
Tuesday 10 March 2026 00:29:07 +0000 (0:00:00.264) 0:00:41.769 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.rsyslog : Include distribution specific install tasks] ****
Tuesday 10 March 2026 00:29:07 +0000 (0:00:00.325) 0:00:42.033 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.rsyslog : Install rsyslog package] ************************
Tuesday 10 March 2026 00:29:08 +0000 (0:00:01.684) 0:00:42.358 *********
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-manager]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-3]

TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] ***********
Tuesday 10 March 2026 00:29:09 +0000 (0:00:01.130) 0:00:44.043 *********
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.rsyslog : Manage rsyslog service] *************************
Tuesday 10 March 2026 00:29:11 +0000 (0:00:00.894) 0:00:45.173 *********
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.rsyslog : Include fluentd tasks] **************************
Tuesday 10 March 2026 00:29:11 +0000 (0:00:00.346) 0:00:46.068 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] ***
Tuesday 10 March 2026 00:29:12 +0000 (0:00:01.057) 0:00:46.414 *********
changed: [testbed-node-5]
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.rsyslog : Include additional log server tasks] ************
Tuesday 10 March 2026 00:29:13 +0000 (0:00:00.243) 0:00:47.472 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.rsyslog : Include logrotate tasks] ************************
Tuesday 10 March 2026 00:29:13 +0000 (0:00:00.352) 0:00:47.715 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
Tuesday 10 March 2026 00:29:13 +0000 (0:00:01.643) 0:00:48.068 *********
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]

TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
Tuesday 10 March 2026 00:29:15 +0000 (0:00:01.201) 0:00:49.712 *********
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.systohc : Install util-linux-extra package] ****************
Tuesday 10 March 2026 00:29:16 +0000 (0:00:13.880) 0:00:50.913 *********
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-manager]

TASK [osism.commons.systohc : Sync hardware clock] *****************************
Tuesday 10 March 2026 00:29:30 +0000 (0:00:00.965) 0:01:04.794 *********
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-manager]

TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
Tuesday 10 March 2026 00:29:31 +0000 (0:00:00.974) 0:01:05.759 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Gather variables for each operating system] *****
Tuesday 10 March 2026 00:29:32 +0000 0:01:06.733 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok:
[testbed-manager] 2026-03-10 00:29:33.448291 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:29:33.448296 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:29:33.448300 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:29:33.448304 | orchestrator | 2026-03-10 00:29:33.448308 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] *** 2026-03-10 00:29:33.448313 | orchestrator | Tuesday 10 March 2026 00:29:32 +0000 (0:00:00.246) 0:01:06.980 ********* 2026-03-10 00:29:33.448317 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:29:33.448321 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:29:33.448326 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:29:33.448330 | orchestrator | ok: [testbed-manager] 2026-03-10 00:29:33.448334 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:29:33.448339 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:29:33.448343 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:29:33.448347 | orchestrator | 2026-03-10 00:29:33.448352 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] **** 2026-03-10 00:29:33.448356 | orchestrator | Tuesday 10 March 2026 00:29:33 +0000 (0:00:00.247) 0:01:07.227 ********* 2026-03-10 00:29:33.448360 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:29:33.448365 | orchestrator | 2026-03-10 00:29:33.448372 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ******************** 2026-03-10 00:31:57.337174 | orchestrator | Tuesday 10 March 2026 00:29:33 +0000 (0:00:00.302) 0:01:07.530 ********* 2026-03-10 00:31:57.337325 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:31:57.337354 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:31:57.337374 | orchestrator | 
ok: [testbed-manager] 2026-03-10 00:31:57.337393 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:31:57.337415 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:31:57.337435 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:31:57.337454 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:31:57.337465 | orchestrator | 2026-03-10 00:31:57.337477 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] *************************** 2026-03-10 00:31:57.337519 | orchestrator | Tuesday 10 March 2026 00:29:35 +0000 (0:00:01.735) 0:01:09.266 ********* 2026-03-10 00:31:57.337531 | orchestrator | changed: [testbed-manager] 2026-03-10 00:31:57.337543 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:31:57.337554 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:31:57.337565 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:31:57.337575 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:31:57.337586 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:31:57.337597 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:31:57.337608 | orchestrator | 2026-03-10 00:31:57.337619 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] *** 2026-03-10 00:31:57.337649 | orchestrator | Tuesday 10 March 2026 00:29:35 +0000 (0:00:00.525) 0:01:09.792 ********* 2026-03-10 00:31:57.337662 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:31:57.337676 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:31:57.337688 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:31:57.337700 | orchestrator | ok: [testbed-manager] 2026-03-10 00:31:57.337712 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:31:57.337725 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:31:57.337801 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:31:57.337818 | orchestrator | 2026-03-10 00:31:57.337836 | orchestrator | TASK [osism.commons.packages : Update package cache] *************************** 2026-03-10 
00:31:57.337854 | orchestrator | Tuesday 10 March 2026 00:29:35 +0000 (0:00:00.218) 0:01:10.011 ********* 2026-03-10 00:31:57.337872 | orchestrator | ok: [testbed-manager] 2026-03-10 00:31:57.337889 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:31:57.337901 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:31:57.337914 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:31:57.337926 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:31:57.337938 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:31:57.337949 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:31:57.337962 | orchestrator | 2026-03-10 00:31:57.337974 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] ********************** 2026-03-10 00:31:57.337987 | orchestrator | Tuesday 10 March 2026 00:29:37 +0000 (0:00:01.218) 0:01:11.229 ********* 2026-03-10 00:31:57.337999 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:31:57.338011 | orchestrator | changed: [testbed-manager] 2026-03-10 00:31:57.338085 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:31:57.338097 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:31:57.338108 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:31:57.338118 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:31:57.338129 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:31:57.338140 | orchestrator | 2026-03-10 00:31:57.338150 | orchestrator | TASK [osism.commons.packages : Upgrade packages] ******************************* 2026-03-10 00:31:57.338162 | orchestrator | Tuesday 10 March 2026 00:29:38 +0000 (0:00:01.831) 0:01:13.061 ********* 2026-03-10 00:31:57.338172 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:31:57.338183 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:31:57.338194 | orchestrator | ok: [testbed-manager] 2026-03-10 00:31:57.338205 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:31:57.338216 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:31:57.338226 | orchestrator | ok: 
[testbed-node-2] 2026-03-10 00:31:57.338237 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:31:57.338247 | orchestrator | 2026-03-10 00:31:57.338258 | orchestrator | TASK [osism.commons.packages : Download required packages] ********************* 2026-03-10 00:31:57.338269 | orchestrator | Tuesday 10 March 2026 00:29:41 +0000 (0:00:02.285) 0:01:15.346 ********* 2026-03-10 00:31:57.338280 | orchestrator | ok: [testbed-manager] 2026-03-10 00:31:57.338290 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:31:57.338301 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:31:57.338311 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:31:57.338322 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:31:57.338332 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:31:57.338342 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:31:57.338368 | orchestrator | 2026-03-10 00:31:57.338387 | orchestrator | TASK [osism.commons.packages : Install required packages] ********************** 2026-03-10 00:31:57.338423 | orchestrator | Tuesday 10 March 2026 00:30:13 +0000 (0:00:32.268) 0:01:47.614 ********* 2026-03-10 00:31:57.338442 | orchestrator | changed: [testbed-manager] 2026-03-10 00:31:57.338459 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:31:57.338477 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:31:57.338497 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:31:57.338515 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:31:57.338533 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:31:57.338552 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:31:57.338569 | orchestrator | 2026-03-10 00:31:57.338589 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] ********* 2026-03-10 00:31:57.338608 | orchestrator | Tuesday 10 March 2026 00:31:40 +0000 (0:01:27.029) 0:03:14.644 ********* 2026-03-10 00:31:57.338627 | orchestrator | ok: [testbed-manager] 2026-03-10 00:31:57.338640 | orchestrator | 
ok: [testbed-node-5] 2026-03-10 00:31:57.338651 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:31:57.338662 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:31:57.338672 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:31:57.338683 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:31:57.338694 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:31:57.338704 | orchestrator | 2026-03-10 00:31:57.338715 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] *** 2026-03-10 00:31:57.338726 | orchestrator | Tuesday 10 March 2026 00:31:42 +0000 (0:00:01.788) 0:03:16.432 ********* 2026-03-10 00:31:57.338764 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:31:57.338776 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:31:57.338787 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:31:57.338798 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:31:57.338808 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:31:57.338819 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:31:57.338830 | orchestrator | changed: [testbed-manager] 2026-03-10 00:31:57.338840 | orchestrator | 2026-03-10 00:31:57.338851 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] ***************************** 2026-03-10 00:31:57.338862 | orchestrator | Tuesday 10 March 2026 00:31:56 +0000 (0:00:13.779) 0:03:30.211 ********* 2026-03-10 00:31:57.338908 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]}) 2026-03-10 00:31:57.338933 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]}) 2026-03-10 00:31:57.338948 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]}) 2026-03-10 00:31:57.338961 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-10 00:31:57.338985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]}) 2026-03-10 00:31:57.339000 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 
'value': 1024}]}) 2026-03-10 00:31:57.339012 | orchestrator | 2026-03-10 00:31:57.339023 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] *********** 2026-03-10 00:31:57.339034 | orchestrator | Tuesday 10 March 2026 00:31:56 +0000 (0:00:00.424) 0:03:30.636 ********* 2026-03-10 00:31:57.339044 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-10 00:31:57.339055 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:31:57.339067 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-10 00:31:57.339078 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-10 00:31:57.339088 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:31:57.339099 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})  2026-03-10 00:31:57.339110 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:31:57.339121 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:31:57.339132 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-10 00:31:57.339150 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-10 00:31:57.339162 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-10 00:31:57.339173 | orchestrator | 2026-03-10 00:31:57.339183 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] **************** 2026-03-10 00:31:57.339194 | orchestrator | Tuesday 10 March 2026 00:31:57 +0000 (0:00:00.708) 0:03:31.344 ********* 2026-03-10 00:31:57.339204 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-10 00:31:57.339216 | orchestrator | skipping: [testbed-node-3] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-10 00:31:57.339227 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-10 00:31:57.339238 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-10 00:31:57.339249 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-10 00:31:57.339266 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-10 00:32:04.333472 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-10 00:32:04.333635 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-10 00:32:04.333662 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-10 00:32:04.333683 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-10 00:32:04.333705 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-10 00:32:04.333724 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-10 00:32:04.333773 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-10 00:32:04.333827 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-10 00:32:04.333847 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-10 00:32:04.333865 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-10 00:32:04.333884 | orchestrator | skipping: [testbed-node-5] => (item={'name': 
'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-10 00:32:04.333903 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-10 00:32:04.333923 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-10 00:32:04.334118 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-10 00:32:04.334146 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-10 00:32:04.334164 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-10 00:32:04.334183 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-10 00:32:04.334203 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-10 00:32:04.334223 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-10 00:32:04.334244 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-10 00:32:04.334262 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-10 00:32:04.334282 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-10 00:32:04.334303 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-10 00:32:04.334324 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-10 00:32:04.334345 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:32:04.334366 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})  2026-03-10 00:32:04.334387 | 
orchestrator | skipping: [testbed-node-4] 2026-03-10 00:32:04.334407 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})  2026-03-10 00:32:04.334429 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})  2026-03-10 00:32:04.334473 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})  2026-03-10 00:32:04.334494 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})  2026-03-10 00:32:04.334513 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})  2026-03-10 00:32:04.334531 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})  2026-03-10 00:32:04.334551 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})  2026-03-10 00:32:04.334572 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})  2026-03-10 00:32:04.334592 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})  2026-03-10 00:32:04.334610 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:32:04.334630 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:32:04.334650 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-10 00:32:04.334670 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-10 00:32:04.334692 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}) 2026-03-10 00:32:04.334732 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-10 00:32:04.334781 | orchestrator | changed: [testbed-node-1] => (item={'name': 
'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-10 00:32:04.334830 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}) 2026-03-10 00:32:04.334849 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-10 00:32:04.334867 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-10 00:32:04.334885 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}) 2026-03-10 00:32:04.334903 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-10 00:32:04.334920 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-10 00:32:04.334938 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216}) 2026-03-10 00:32:04.334956 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-10 00:32:04.334973 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-10 00:32:04.334992 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216}) 2026-03-10 00:32:04.335010 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-10 00:32:04.335028 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-10 00:32:04.335045 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}) 2026-03-10 00:32:04.335063 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-10 00:32:04.335080 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 
2026-03-10 00:32:04.335099 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}) 2026-03-10 00:32:04.335115 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-10 00:32:04.335132 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-10 00:32:04.335149 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096}) 2026-03-10 00:32:04.335166 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-10 00:32:04.335185 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-10 00:32:04.335203 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-10 00:32:04.335222 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-10 00:32:04.335240 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0}) 2026-03-10 00:32:04.335257 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}) 2026-03-10 00:32:04.335275 | orchestrator | 2026-03-10 00:32:04.335294 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] ***************** 2026-03-10 00:32:04.335313 | orchestrator | Tuesday 10 March 2026 00:32:03 +0000 (0:00:05.945) 0:03:37.289 ********* 2026-03-10 00:32:04.335331 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-10 00:32:04.335349 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-10 00:32:04.335378 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-10 00:32:04.335390 | orchestrator | changed: 
[testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-10 00:32:04.335414 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-10 00:32:04.335425 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-10 00:32:04.335435 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1}) 2026-03-10 00:32:04.335446 | orchestrator | 2026-03-10 00:32:04.335458 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] ***************** 2026-03-10 00:32:04.335468 | orchestrator | Tuesday 10 March 2026 00:32:03 +0000 (0:00:00.639) 0:03:37.928 ********* 2026-03-10 00:32:04.335479 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-10 00:32:04.335491 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:32:04.335502 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-10 00:32:04.335513 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:32:04.335524 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-10 00:32:04.335535 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:32:04.335546 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})  2026-03-10 00:32:04.335557 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:32:04.335567 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-10 00:32:04.335579 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-10 00:32:04.335609 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}) 2026-03-10 
00:32:19.126511 | orchestrator | 
2026-03-10 00:32:19.126630 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
2026-03-10 00:32:19.126644 | orchestrator | Tuesday 10 March 2026 00:32:04 +0000 (0:00:00.512) 0:03:38.441 *********
2026-03-10 00:32:19.126652 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:32:19.126660 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:32:19.126669 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:32:19.126676 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:32:19.126683 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:32:19.126689 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:32:19.126696 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:32:19.126703 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:32:19.126710 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:32:19.126717 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:32:19.126724 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2026-03-10 00:32:19.126751 | orchestrator | 
2026-03-10 00:32:19.126761 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2026-03-10 00:32:19.126768 | orchestrator | Tuesday 10 March 2026 00:32:05 +0000 (0:00:01.634) 0:03:40.076 *********
2026-03-10 00:32:19.126775 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-10 00:32:19.126782 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-10 00:32:19.126788 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:32:19.126795 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-10 00:32:19.126820 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:32:19.126827 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:32:19.126834 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-10 00:32:19.126841 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:32:19.126848 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-10 00:32:19.126855 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-10 00:32:19.126861 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2026-03-10 00:32:19.126868 | orchestrator | 
2026-03-10 00:32:19.126875 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2026-03-10 00:32:19.126882 | orchestrator | Tuesday 10 March 2026 00:32:06 +0000 (0:00:00.541) 0:03:40.617 *********
2026-03-10 00:32:19.126888 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:32:19.126896 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:32:19.126902 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:32:19.126909 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:32:19.126916 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:32:19.126923 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:32:19.126929 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:32:19.126936 | orchestrator | 
2026-03-10 00:32:19.126943 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2026-03-10 00:32:19.126950 | orchestrator | Tuesday 10 March 2026 00:32:06 +0000 (0:00:00.348) 0:03:40.966 *********
2026-03-10 00:32:19.126957 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:32:19.126965 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:32:19.126971 | orchestrator | ok: [testbed-manager]
2026-03-10 00:32:19.126978 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:32:19.126985 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:32:19.126991 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:32:19.126998 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:32:19.127005 | orchestrator | 
2026-03-10 00:32:19.127011 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2026-03-10 00:32:19.127018 | orchestrator | Tuesday 10 March 2026 00:32:12 +0000 (0:00:05.784) 0:03:46.751 *********
2026-03-10 00:32:19.127025 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2026-03-10 00:32:19.127032 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:32:19.127040 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2026-03-10 00:32:19.127048 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:32:19.127056 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2026-03-10 00:32:19.127063 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:32:19.127071 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2026-03-10 00:32:19.127080 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2026-03-10 00:32:19.127088 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:32:19.127096 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2026-03-10 00:32:19.127104 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:32:19.127112 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:32:19.127121 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2026-03-10 00:32:19.127129 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:32:19.127136 | orchestrator | 
2026-03-10 00:32:19.127144 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2026-03-10 00:32:19.127151 | orchestrator | Tuesday 10 March 2026 00:32:13 +0000 (0:00:00.445) 0:03:47.196 *********
2026-03-10 00:32:19.127158 | orchestrator | ok: [testbed-node-4] => (item=cron)
2026-03-10 00:32:19.127166 | orchestrator | ok: [testbed-manager] => (item=cron)
2026-03-10 00:32:19.127173 | orchestrator | ok: [testbed-node-0] => (item=cron)
2026-03-10 00:32:19.127195 | orchestrator | ok: [testbed-node-5] => (item=cron)
2026-03-10 00:32:19.127203 | orchestrator | ok: [testbed-node-1] => (item=cron)
2026-03-10 00:32:19.127210 | orchestrator | ok: [testbed-node-2] => (item=cron)
2026-03-10 00:32:19.127224 | orchestrator | ok: [testbed-node-3] => (item=cron)
2026-03-10 00:32:19.127231 | orchestrator | 
2026-03-10 00:32:19.127239 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2026-03-10 00:32:19.127246 | orchestrator | Tuesday 10 March 2026 00:32:14 +0000 (0:00:01.814) 0:03:49.010 *********
2026-03-10 00:32:19.127255 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:32:19.127264 | orchestrator | 
2026-03-10 00:32:19.127272 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2026-03-10 00:32:19.127279 | orchestrator | Tuesday 10 March 2026 00:32:15 +0000 (0:00:00.478) 0:03:49.488 *********
2026-03-10 00:32:19.127286 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:32:19.127294 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:32:19.127301 | orchestrator | ok: [testbed-manager]
2026-03-10 00:32:19.127308 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:32:19.127315 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:32:19.127322 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:32:19.127329 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:32:19.127337 | orchestrator | 
2026-03-10 00:32:19.127344 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2026-03-10 00:32:19.127351 | orchestrator | Tuesday 10 March 2026 00:32:16 +0000 (0:00:01.222) 0:03:50.711 *********
2026-03-10 00:32:19.127358 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:32:19.127366 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:32:19.127373 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:32:19.127380 | orchestrator | ok: [testbed-manager]
2026-03-10 00:32:19.127387 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:32:19.127394 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:32:19.127401 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:32:19.127408 | orchestrator | 
2026-03-10 00:32:19.127415 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2026-03-10 00:32:19.127422 | orchestrator | Tuesday 10 March 2026 00:32:17 +0000 (0:00:00.622) 0:03:51.334 *********
2026-03-10 00:32:19.127430 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:32:19.127452 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:32:19.127460 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:32:19.127467 | orchestrator | changed: [testbed-manager]
2026-03-10 00:32:19.127474 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:32:19.127481 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:32:19.127488 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:32:19.127496 | orchestrator | 
2026-03-10 00:32:19.127503 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2026-03-10 00:32:19.127510 | orchestrator | Tuesday 10 March 2026 00:32:17 +0000 (0:00:00.695) 0:03:52.029 *********
2026-03-10 00:32:19.127517 | orchestrator | ok: [testbed-manager]
2026-03-10 00:32:19.127524 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:32:19.127532 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:32:19.127539 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:32:19.127546 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:32:19.127553 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:32:19.127560 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:32:19.127568 | orchestrator | 
2026-03-10 00:32:19.127575 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2026-03-10 00:32:19.127584 | orchestrator | Tuesday 10 March 2026 00:32:18 +0000 (0:00:00.652) 0:03:52.682 *********
2026-03-10 00:32:19.127688 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773101119.5704474, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:32:19.127719 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773101145.4956918, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:32:19.127804 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773101148.7838633, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:32:19.127849 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773101150.0859435, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:32:24.596327 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773101143.7896829, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:32:24.596423 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773101146.1930685, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:32:24.596432 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773101132.4011645, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:32:24.596454 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:32:24.596477 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:32:24.596483 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:32:24.596489 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:32:24.596542 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:32:24.596549 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:32:24.596556 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-10 00:32:24.596562 | orchestrator | 
2026-03-10 00:32:24.596570 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2026-03-10 00:32:24.596577 | orchestrator | Tuesday 10 March 2026 00:32:19 +0000 (0:00:01.032) 0:03:53.714 *********
2026-03-10 00:32:24.596584 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:32:24.596591 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:32:24.596603 | orchestrator | changed: [testbed-manager]
2026-03-10 00:32:24.596608 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:32:24.596615 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:32:24.596620 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:32:24.596626 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:32:24.596632 | orchestrator | 
2026-03-10 00:32:24.596638 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2026-03-10 00:32:24.596644 | orchestrator | Tuesday 10 March 2026 00:32:20 +0000 (0:00:01.122) 0:03:54.837 *********
2026-03-10 00:32:24.596649 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:32:24.596654 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:32:24.596660 | orchestrator | changed: [testbed-manager]
2026-03-10 00:32:24.596670 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:32:24.596676 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:32:24.596682 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:32:24.596687 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:32:24.596693 | orchestrator | 
2026-03-10 00:32:24.596699 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2026-03-10 00:32:24.596705 | orchestrator | Tuesday 10 March 2026 00:32:21 +0000 (0:00:01.148) 0:03:55.986 *********
2026-03-10 00:32:24.596711 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:32:24.596716 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:32:24.596722 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:32:24.596750 | orchestrator | changed: [testbed-manager]
2026-03-10 00:32:24.596757 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:32:24.596763 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:32:24.596769 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:32:24.596774 | orchestrator | 
2026-03-10 00:32:24.596780 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2026-03-10 00:32:24.596786 | orchestrator | Tuesday 10 March 2026 00:32:23 +0000 (0:00:01.151) 0:03:57.137 *********
2026-03-10 00:32:24.596792 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:32:24.596798 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:32:24.596803 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:32:24.596809 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:32:24.596815 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:32:24.596821 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:32:24.596827 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:32:24.596833 | orchestrator | 2026-03-10 00:32:24.596839 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-10 00:32:24.596845 | orchestrator | Tuesday 10 March 2026 00:32:23 +0000 (0:00:00.301) 0:03:57.439 ********* 2026-03-10 00:32:24.596851 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:32:24.596858 | orchestrator | ok: [testbed-manager] 2026-03-10 00:32:24.596864 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:32:24.596870 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:32:24.596875 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:32:24.596881 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:32:24.596887 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:32:24.596893 | orchestrator | 2026-03-10 00:32:24.596899 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-10 00:32:24.596905 | orchestrator | Tuesday 10 March 2026 00:32:24 +0000 (0:00:00.834) 0:03:58.273 ********* 2026-03-10 00:32:24.596912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:32:24.596919 | orchestrator | 2026-03-10 00:32:24.596925 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-10 00:32:24.596936 | orchestrator | Tuesday 10 March 2026 00:32:24 +0000 (0:00:00.408) 0:03:58.681 ********* 2026-03-10 00:33:41.161231 | orchestrator | ok: [testbed-manager] 2026-03-10 00:33:41.161313 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:33:41.161321 | orchestrator | changed: 
[testbed-node-5] 2026-03-10 00:33:41.161345 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:33:41.161351 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:33:41.161355 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:33:41.161360 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:33:41.161365 | orchestrator | 2026-03-10 00:33:41.161371 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-03-10 00:33:41.161377 | orchestrator | Tuesday 10 March 2026 00:32:32 +0000 (0:00:07.918) 0:04:06.599 ********* 2026-03-10 00:33:41.161382 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:33:41.161387 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:33:41.161391 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:33:41.161396 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:33:41.161401 | orchestrator | ok: [testbed-manager] 2026-03-10 00:33:41.161405 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:33:41.161410 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:33:41.161414 | orchestrator | 2026-03-10 00:33:41.161420 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-10 00:33:41.161425 | orchestrator | Tuesday 10 March 2026 00:32:33 +0000 (0:00:01.404) 0:04:08.004 ********* 2026-03-10 00:33:41.161429 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:33:41.161434 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:33:41.161438 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:33:41.161443 | orchestrator | ok: [testbed-manager] 2026-03-10 00:33:41.161447 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:33:41.161452 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:33:41.161456 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:33:41.161461 | orchestrator | 2026-03-10 00:33:41.161465 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-10 00:33:41.161470 | orchestrator | 
Tuesday 10 March 2026 00:32:34 +0000 (0:00:01.011) 0:04:09.016 ********* 2026-03-10 00:33:41.161475 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:33:41.161479 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:33:41.161484 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:33:41.161488 | orchestrator | ok: [testbed-manager] 2026-03-10 00:33:41.161493 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:33:41.161497 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:33:41.161502 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:33:41.161506 | orchestrator | 2026-03-10 00:33:41.161511 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-03-10 00:33:41.161517 | orchestrator | Tuesday 10 March 2026 00:32:35 +0000 (0:00:00.367) 0:04:09.383 ********* 2026-03-10 00:33:41.161522 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:33:41.161526 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:33:41.161531 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:33:41.161535 | orchestrator | ok: [testbed-manager] 2026-03-10 00:33:41.161540 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:33:41.161544 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:33:41.161549 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:33:41.161553 | orchestrator | 2026-03-10 00:33:41.161558 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-03-10 00:33:41.161563 | orchestrator | Tuesday 10 March 2026 00:32:35 +0000 (0:00:00.297) 0:04:09.680 ********* 2026-03-10 00:33:41.161567 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:33:41.161572 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:33:41.161576 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:33:41.161581 | orchestrator | ok: [testbed-manager] 2026-03-10 00:33:41.161585 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:33:41.161590 | orchestrator | ok: [testbed-node-1] 2026-03-10 
00:33:41.161595 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:33:41.161599 | orchestrator | 2026-03-10 00:33:41.161604 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-03-10 00:33:41.161609 | orchestrator | Tuesday 10 March 2026 00:32:35 +0000 (0:00:00.365) 0:04:10.046 ********* 2026-03-10 00:33:41.161614 | orchestrator | ok: [testbed-manager] 2026-03-10 00:33:41.161619 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:33:41.161624 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:33:41.161632 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:33:41.161637 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:33:41.161641 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:33:41.161646 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:33:41.161650 | orchestrator | 2026-03-10 00:33:41.161655 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-03-10 00:33:41.161660 | orchestrator | Tuesday 10 March 2026 00:32:41 +0000 (0:00:05.711) 0:04:15.757 ********* 2026-03-10 00:33:41.161666 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:33:41.161673 | orchestrator | 2026-03-10 00:33:41.161678 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-03-10 00:33:41.161683 | orchestrator | Tuesday 10 March 2026 00:32:42 +0000 (0:00:00.449) 0:04:16.207 ********* 2026-03-10 00:33:41.161688 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-03-10 00:33:41.161692 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-03-10 00:33:41.161697 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)  2026-03-10 00:33:41.161702 | orchestrator | skipping: 
[testbed-node-4] => (item=apt-daily)  2026-03-10 00:33:41.161707 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:33:41.161729 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-03-10 00:33:41.161733 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:33:41.161738 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-03-10 00:33:41.161743 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-03-10 00:33:41.161747 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-03-10 00:33:41.161752 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:33:41.161756 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-03-10 00:33:41.161761 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-03-10 00:33:41.161766 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:33:41.161770 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-03-10 00:33:41.161775 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:33:41.161791 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-03-10 00:33:41.161796 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:33:41.161800 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-03-10 00:33:41.161805 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-03-10 00:33:41.161810 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:33:41.161814 | orchestrator | 2026-03-10 00:33:41.161819 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-03-10 00:33:41.161823 | orchestrator | Tuesday 10 March 2026 00:32:42 +0000 (0:00:00.343) 0:04:16.551 ********* 2026-03-10 00:33:41.161828 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:33:41.161833 | orchestrator | 2026-03-10 00:33:41.161838 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-03-10 00:33:41.161842 | orchestrator | Tuesday 10 March 2026 00:32:42 +0000 (0:00:00.451) 0:04:17.003 ********* 2026-03-10 00:33:41.161847 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-03-10 00:33:41.161852 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-03-10 00:33:41.161857 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:33:41.161861 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:33:41.161866 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-03-10 00:33:41.161870 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-03-10 00:33:41.161879 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:33:41.161884 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-03-10 00:33:41.161888 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:33:41.161906 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:33:41.161911 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-03-10 00:33:41.161916 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:33:41.161920 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-03-10 00:33:41.161925 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:33:41.161929 | orchestrator | 2026-03-10 00:33:41.161934 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-03-10 00:33:41.161938 | orchestrator | Tuesday 10 March 2026 00:32:43 +0000 (0:00:00.367) 0:04:17.370 ********* 2026-03-10 00:33:41.161943 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:33:41.161948 | orchestrator | 2026-03-10 00:33:41.161952 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] ********************** 2026-03-10 00:33:41.161959 | orchestrator | Tuesday 10 March 2026 00:32:43 +0000 (0:00:00.428) 0:04:17.798 ********* 2026-03-10 00:33:41.161964 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:33:41.161969 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:33:41.161973 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:33:41.161978 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:33:41.161983 | orchestrator | changed: [testbed-manager] 2026-03-10 00:33:41.161987 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:33:41.161992 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:33:41.161996 | orchestrator | 2026-03-10 00:33:41.162001 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************ 2026-03-10 00:33:41.162005 | orchestrator | Tuesday 10 March 2026 00:33:18 +0000 (0:00:34.432) 0:04:52.231 ********* 2026-03-10 00:33:41.162010 | orchestrator | changed: [testbed-manager] 2026-03-10 00:33:41.162053 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:33:41.162058 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:33:41.162062 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:33:41.162067 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:33:41.162072 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:33:41.162076 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:33:41.162081 | orchestrator | 2026-03-10 00:33:41.162086 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] *********** 2026-03-10 00:33:41.162090 | orchestrator | 
Tuesday 10 March 2026 00:33:25 +0000 (0:00:07.747) 0:04:59.978 *********
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-manager]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]

TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
Tuesday 10 March 2026 00:33:33 +0000 (0:00:07.839) 0:05:07.817 *********
ok: [testbed-node-5]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-0]
ok: [testbed-manager]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
Tuesday 10 March 2026 00:33:35 +0000 (0:00:01.606) 0:05:09.424 *********
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-manager]
changed: [testbed-node-3]

TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
Tuesday 10 March 2026 00:33:41 +0000 (0:00:00.425) 0:05:15.244 *********
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
Tuesday 10 March 2026 00:33:41 +0000 (0:00:00.425) 0:05:15.669 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.timezone : Install tzdata package] *************************
Tuesday 10 March 2026 00:33:42 +0000 (0:00:00.765) 0:05:16.435 *********
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-manager]

TASK [osism.commons.timezone : Set timezone to UTC] ****************************
Tuesday 10 March 2026 00:33:44 +0000 (0:00:01.880) 0:05:18.315 *********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-manager]

TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
Tuesday 10 March 2026 00:33:46 +0000 (0:00:01.851) 0:05:20.166 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
Tuesday 10 March 2026 00:33:46 +0000 (0:00:00.311) 0:05:20.477 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Gather variables for each operating system] ******
Tuesday 10 March 2026 00:33:46 +0000 (0:00:00.433) 0:05:20.911 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Set docker_version variable to default value] ****
Tuesday 10 March 2026 00:33:47 +0000 (0:00:00.337) 0:05:21.249 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
Tuesday 10 March 2026 00:33:47 +0000 (0:00:00.294) 0:05:21.544 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Print used docker version] ***********************
Tuesday 10 March 2026 00:33:47 +0000 (0:00:00.351) 0:05:21.895 *********
ok: [testbed-node-3] =>
  docker_version: 5:27.5.1
ok: [testbed-node-4] =>
  docker_version: 5:27.5.1
ok: [testbed-node-5] =>
  docker_version: 5:27.5.1
ok: [testbed-manager] =>
  docker_version: 5:27.5.1
ok: [testbed-node-0] =>
  docker_version: 5:27.5.1
ok: [testbed-node-1] =>
  docker_version: 5:27.5.1
ok: [testbed-node-2] =>
  docker_version: 5:27.5.1

TASK [osism.services.docker : Print used docker cli version] *******************
Tuesday 10 March 2026 00:33:48 +0000 (0:00:00.360) 0:05:22.256 *********
ok: [testbed-node-3] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-4] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-5] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-manager] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-0] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-1] =>
  docker_cli_version: 5:27.5.1
ok: [testbed-node-2] =>
  docker_cli_version: 5:27.5.1

TASK [osism.services.docker : Include block storage tasks] *********************
Tuesday 10 March 2026 00:33:48 +0000 (0:00:00.300) 0:05:22.557 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Include zram storage tasks] **********************
Tuesday 10 March 2026 00:33:48 +0000 (0:00:00.291) 0:05:22.849 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Include docker install tasks] ********************
Tuesday 10 March 2026 00:33:49 +0000 (0:00:00.291) 0:05:23.140 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.docker : Remove old architecture-dependent repository] ****
Tuesday 10 March 2026 00:33:49 +0000 (0:00:00.577) 0:05:23.718 *********
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-2]

TASK [osism.services.docker : Gather package facts] ****************************
Tuesday 10 March 2026 00:33:50 +0000 (0:00:00.799) 0:05:24.517 *********
ok: [testbed-node-2]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-manager]
ok: [testbed-node-3]

TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
Tuesday 10 March 2026 00:33:53 +0000 (0:00:02.898) 0:05:27.416 *********
skipping: [testbed-node-3] => (item=containerd)
skipping: [testbed-node-3] => (item=docker.io)
skipping: [testbed-node-3] => (item=docker-engine)
skipping: [testbed-node-4] => (item=containerd)
skipping: [testbed-node-4] => (item=docker.io)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=docker-engine)
skipping: [testbed-node-5] => (item=containerd)
skipping: [testbed-node-5] => (item=docker.io)
skipping: [testbed-node-5] => (item=docker-engine)
skipping: [testbed-node-4]
skipping: [testbed-manager] => (item=containerd)
skipping: [testbed-manager] => (item=docker.io)
skipping: [testbed-node-5]
skipping: [testbed-manager] => (item=docker-engine)
skipping: [testbed-node-0] => (item=containerd)
skipping: [testbed-node-0] => (item=docker.io)
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item=docker-engine)
skipping: [testbed-node-1] => (item=containerd)
skipping: [testbed-node-1] => (item=docker.io)
skipping: [testbed-node-1] => (item=docker-engine)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=containerd)
skipping: [testbed-node-2] => (item=docker.io)
skipping: [testbed-node-2] => (item=docker-engine)
skipping: [testbed-node-2]

TASK [osism.services.docker : Install apt-transport-https package] *************
Tuesday 10 March 2026 00:33:53 +0000 (0:00:00.613) 0:05:28.030 *********
ok: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [osism.services.docker : Add repository gpg key] **************************
Tuesday 10 March 2026 00:34:00 +0000 (0:00:06.609) 0:05:34.639 *********
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-4]
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Add repository] **********************************
Tuesday 10 March 2026 00:34:01 +0000 (0:00:01.066) 0:05:35.706 *********
ok: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Update package cache] ****************************
Tuesday 10 March 2026 00:34:09 +0000 (0:00:07.953) 0:05:43.660 *********
changed: [testbed-node-4]
changed: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-3]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [osism.services.docker : Pin docker package version] **********************
Tuesday 10 March 2026 00:34:12 +0000 (0:00:03.398) 0:05:47.058 *********
changed: [testbed-node-3]
ok: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Pin docker-cli package version] ******************
Tuesday 10 March 2026 00:34:14 +0000 (0:00:01.355) 0:05:48.413 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Unlock containerd package] ***********************
Tuesday 10 March 2026 00:34:15 +0000 (0:00:01.526) 0:05:49.940 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-manager]

TASK [osism.services.docker : Install containerd package] **********************
Tuesday 10 March 2026 00:34:16 +0000 (0:00:00.840) 0:05:50.780 *********
ok: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-2]

TASK [osism.services.docker : Lock containerd package] *************************
Tuesday 10 March 2026 00:34:25 +0000 (0:00:09.167) 0:05:59.948 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Install docker-cli package] **********************
Tuesday 10 March 2026 00:34:26 +0000 (0:00:00.891) 0:06:00.840 *********
ok: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Install docker package] **************************
Tuesday 10 March 2026 00:34:35 +0000 (0:00:08.948) 0:06:09.788 *********
ok: [testbed-manager]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-3]
changed: [testbed-node-2]

TASK [osism.services.docker : Unblock installation of python docker packages] ***
Tuesday 10 March 2026 00:34:46 +0000 (0:00:10.495) 0:06:20.284 *********
ok: [testbed-node-3] => (item=python3-docker)
ok: [testbed-node-4] => (item=python3-docker)
ok: [testbed-node-5] => (item=python3-docker)
ok: [testbed-manager] => (item=python3-docker)
ok: [testbed-node-0] => (item=python3-docker)
ok: [testbed-node-3] => (item=python-docker)
ok: [testbed-node-1] => (item=python3-docker)
ok: [testbed-node-4] => (item=python-docker)
ok: [testbed-node-2] => (item=python3-docker)
ok: [testbed-node-5] => (item=python-docker)
ok: [testbed-manager] => (item=python-docker)
ok: [testbed-node-0] => (item=python-docker)
ok: [testbed-node-1] => (item=python-docker)
ok: [testbed-node-2] => (item=python-docker)

TASK [osism.services.docker : Install python3 docker package] ******************
Tuesday 10 March 2026 00:34:47 +0000 (0:00:01.193) 0:06:21.477 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
Tuesday 10 March 2026 00:34:47 +0000 (0:00:00.522) 0:06:22.000 *********
ok: [testbed-manager]
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
Tuesday 10 March 2026 00:34:51 +0000 (0:00:03.747) 0:06:25.748 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
Tuesday 10 March 2026 00:34:52 +0000 (0:00:00.745) 0:06:26.493 *********
skipping: [testbed-node-3] => (item=python3-docker)
skipping: [testbed-node-3] => (item=python-docker)
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=python3-docker)
skipping: [testbed-node-4] => (item=python-docker)
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item=python3-docker)
skipping: [testbed-node-5] => (item=python-docker)
skipping: [testbed-node-5]
skipping: [testbed-manager] => (item=python3-docker)
skipping: [testbed-manager] => (item=python-docker)
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item=python3-docker)
skipping: [testbed-node-0] => (item=python-docker)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=python3-docker)
skipping: [testbed-node-1] => (item=python-docker)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=python3-docker)
skipping: [testbed-node-2] => (item=python-docker)
skipping: [testbed-node-2]

TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
Tuesday 10 March 2026 00:34:53 +0000 (0:00:00.610) 0:06:27.104 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
Tuesday 10 March 2026 00:34:53 +0000 (0:00:00.514) 0:06:27.619 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
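The pin/hold sequence the docker role runs above (pin docker and docker-cli to 5:27.5.1, then lock containerd) can be reproduced by hand with an apt preferences fragment. A minimal sketch, writing to the current directory rather than /etc/apt/preferences.d/ (the real file name and path used by the role are assumptions):

```shell
# Sketch only: pin the Docker packages to the version the log reports.
# An apt preferences stanza with priority > 1000 forces this version even
# if it would be a downgrade.
cat > docker-ce.pref <<'EOF'
Package: docker-ce docker-ce-cli
Pin: version 5:27.5.1*
Pin-Priority: 1001
EOF
# On a real host, the "Lock containerd package" step corresponds to:
#   apt-mark hold containerd.io
grep -c '^Pin' docker-ce.pref
```

The wildcard after the version matches the distro-specific suffix (e.g. `~ubuntu.24.04~noble`) that Debian-family Docker packages carry.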

TASK [osism.services.docker : Install packages required by docker login] *******
Tuesday 10 March 2026 00:34:54 +0000 (0:00:00.529) 0:06:28.148 *********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.docker : Ensure that some packages are not installed] *****
Tuesday 10 March 2026 00:34:54 +0000 (0:00:00.522) 0:06:28.671 *********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.services.docker : Include config tasks] ****************************
Tuesday 10 March 2026 00:34:56 +0000 (0:00:01.911) 0:06:30.582 *********
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.docker : Create plugins directory] ************************
Tuesday 10 March 2026 00:34:57 +0000 (0:00:00.867) 0:06:31.449 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Create systemd overlay directory] ****************
Tuesday 10 March 2026 00:34:58 +0000 (0:00:00.819) 0:06:32.269 *********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]
ok: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.services.docker : Copy systemd overlay file] ***********************
Tuesday 10 March 2026 00:34:59 +0000 (0:00:00.838) 0:06:33.107 *********
changed: [testbed-node-3]
00:35:11.903872 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:35:11.903883 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:11.903893 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:35:11.903904 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:35:11.903914 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:35:11.903924 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:35:11.903935 | orchestrator | 2026-03-10 00:35:11.903946 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2026-03-10 00:35:11.903976 | orchestrator | Tuesday 10 March 2026 00:35:00 +0000 (0:00:01.547) 0:06:34.655 ********* 2026-03-10 00:35:11.903987 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:35:11.903998 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:35:11.904009 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:35:11.904020 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:35:11.904030 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:35:11.904041 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:35:11.904051 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:35:11.904062 | orchestrator | 2026-03-10 00:35:11.904072 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2026-03-10 00:35:11.904083 | orchestrator | Tuesday 10 March 2026 00:35:02 +0000 (0:00:01.441) 0:06:36.096 ********* 2026-03-10 00:35:11.904094 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:35:11.904105 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:35:11.904115 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:35:11.904126 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:11.904136 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:35:11.904147 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:35:11.904158 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:35:11.904168 | orchestrator | 2026-03-10 
00:35:11.904179 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-03-10 00:35:11.904189 | orchestrator | Tuesday 10 March 2026 00:35:03 +0000 (0:00:01.313) 0:06:37.410 ********* 2026-03-10 00:35:11.904200 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:35:11.904211 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:35:11.904221 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:35:11.904232 | orchestrator | changed: [testbed-manager] 2026-03-10 00:35:11.904242 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:35:11.904253 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:35:11.904263 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:35:11.904274 | orchestrator | 2026-03-10 00:35:11.904285 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-03-10 00:35:11.904295 | orchestrator | Tuesday 10 March 2026 00:35:04 +0000 (0:00:01.420) 0:06:38.830 ********* 2026-03-10 00:35:11.904306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:35:11.904317 | orchestrator | 2026-03-10 00:35:11.904328 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-03-10 00:35:11.904355 | orchestrator | Tuesday 10 March 2026 00:35:05 +0000 (0:00:01.117) 0:06:39.947 ********* 2026-03-10 00:35:11.904366 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:35:11.904377 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:35:11.904388 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:35:11.904398 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:35:11.904409 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:11.904420 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:35:11.904430 | orchestrator | ok: 
[testbed-node-2] 2026-03-10 00:35:11.904441 | orchestrator | 2026-03-10 00:35:11.904452 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-10 00:35:11.904463 | orchestrator | Tuesday 10 March 2026 00:35:07 +0000 (0:00:01.303) 0:06:41.251 ********* 2026-03-10 00:35:11.904473 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:35:11.904484 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:35:11.904495 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:35:11.904505 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:11.904516 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:35:11.904526 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:35:11.904537 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:35:11.904547 | orchestrator | 2026-03-10 00:35:11.904558 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-10 00:35:11.904569 | orchestrator | Tuesday 10 March 2026 00:35:08 +0000 (0:00:01.156) 0:06:42.408 ********* 2026-03-10 00:35:11.904579 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:35:11.904590 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:35:11.904600 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:35:11.904611 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:35:11.904621 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:35:11.904632 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:11.904642 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:35:11.904653 | orchestrator | 2026-03-10 00:35:11.904664 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-10 00:35:11.904675 | orchestrator | Tuesday 10 March 2026 00:35:09 +0000 (0:00:01.162) 0:06:43.570 ********* 2026-03-10 00:35:11.904709 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:35:11.904721 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:35:11.904732 | orchestrator | ok: [testbed-node-5] 2026-03-10 
00:35:11.904743 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:11.904754 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:35:11.904764 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:35:11.904775 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:35:11.904786 | orchestrator | 2026-03-10 00:35:11.904797 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-10 00:35:11.904808 | orchestrator | Tuesday 10 March 2026 00:35:10 +0000 (0:00:01.303) 0:06:44.874 ********* 2026-03-10 00:35:11.904819 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:35:11.904830 | orchestrator | 2026-03-10 00:35:11.904841 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-10 00:35:11.904852 | orchestrator | Tuesday 10 March 2026 00:35:11 +0000 (0:00:00.972) 0:06:45.846 ********* 2026-03-10 00:35:11.904863 | orchestrator | 2026-03-10 00:35:11.904874 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-10 00:35:11.904884 | orchestrator | Tuesday 10 March 2026 00:35:11 +0000 (0:00:00.040) 0:06:45.887 ********* 2026-03-10 00:35:11.904895 | orchestrator | 2026-03-10 00:35:11.904906 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-10 00:35:11.904917 | orchestrator | Tuesday 10 March 2026 00:35:11 +0000 (0:00:00.046) 0:06:45.933 ********* 2026-03-10 00:35:11.904928 | orchestrator | 2026-03-10 00:35:11.904938 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-10 00:35:11.904956 | orchestrator | Tuesday 10 March 2026 00:35:11 +0000 (0:00:00.050) 0:06:45.983 ********* 2026-03-10 00:35:38.095335 | orchestrator | 
2026-03-10 00:35:38.095440 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-10 00:35:38.095451 | orchestrator | Tuesday 10 March 2026 00:35:11 +0000 (0:00:00.042) 0:06:46.026 ********* 2026-03-10 00:35:38.095458 | orchestrator | 2026-03-10 00:35:38.095465 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-10 00:35:38.095471 | orchestrator | Tuesday 10 March 2026 00:35:11 +0000 (0:00:00.049) 0:06:46.075 ********* 2026-03-10 00:35:38.095477 | orchestrator | 2026-03-10 00:35:38.095484 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-10 00:35:38.095490 | orchestrator | Tuesday 10 March 2026 00:35:12 +0000 (0:00:00.048) 0:06:46.124 ********* 2026-03-10 00:35:38.095496 | orchestrator | 2026-03-10 00:35:38.095503 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-10 00:35:38.095509 | orchestrator | Tuesday 10 March 2026 00:35:12 +0000 (0:00:00.052) 0:06:46.176 ********* 2026-03-10 00:35:38.095515 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:35:38.095523 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:35:38.095529 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:35:38.095535 | orchestrator | 2026-03-10 00:35:38.095541 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-03-10 00:35:38.095547 | orchestrator | Tuesday 10 March 2026 00:35:13 +0000 (0:00:01.245) 0:06:47.422 ********* 2026-03-10 00:35:38.095554 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:35:38.095561 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:35:38.095567 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:35:38.095574 | orchestrator | changed: [testbed-manager] 2026-03-10 00:35:38.095580 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:35:38.095586 | orchestrator | changed: 
[testbed-node-1] 2026-03-10 00:35:38.095592 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:35:38.095599 | orchestrator | 2026-03-10 00:35:38.095605 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-03-10 00:35:38.095611 | orchestrator | Tuesday 10 March 2026 00:35:14 +0000 (0:00:01.548) 0:06:48.970 ********* 2026-03-10 00:35:38.095617 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:35:38.095623 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:35:38.095629 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:35:38.095635 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:35:38.095641 | orchestrator | changed: [testbed-manager] 2026-03-10 00:35:38.095647 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:35:38.095653 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:35:38.095659 | orchestrator | 2026-03-10 00:35:38.095666 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-03-10 00:35:38.095700 | orchestrator | Tuesday 10 March 2026 00:35:16 +0000 (0:00:01.212) 0:06:50.183 ********* 2026-03-10 00:35:38.095706 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:35:38.095712 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:35:38.095718 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:35:38.095724 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:35:38.095730 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:35:38.095736 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:35:38.095742 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:35:38.095749 | orchestrator | 2026-03-10 00:35:38.095768 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-03-10 00:35:38.095775 | orchestrator | Tuesday 10 March 2026 00:35:18 +0000 (0:00:02.512) 0:06:52.695 ********* 2026-03-10 00:35:38.095781 | orchestrator | skipping: [testbed-node-3] 
2026-03-10 00:35:38.095787 | orchestrator | 2026-03-10 00:35:38.095793 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-03-10 00:35:38.095799 | orchestrator | Tuesday 10 March 2026 00:35:18 +0000 (0:00:00.089) 0:06:52.785 ********* 2026-03-10 00:35:38.095806 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:35:38.095812 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:35:38.095818 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:38.095825 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:35:38.095837 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:35:38.095843 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:35:38.095850 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:35:38.095856 | orchestrator | 2026-03-10 00:35:38.095862 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-03-10 00:35:38.095869 | orchestrator | Tuesday 10 March 2026 00:35:19 +0000 (0:00:01.055) 0:06:53.841 ********* 2026-03-10 00:35:38.095876 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:35:38.095882 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:35:38.095888 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:35:38.095894 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:35:38.095901 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:35:38.095909 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:35:38.095916 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:35:38.095923 | orchestrator | 2026-03-10 00:35:38.095931 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-03-10 00:35:38.095938 | orchestrator | Tuesday 10 March 2026 00:35:20 +0000 (0:00:00.760) 0:06:54.601 ********* 2026-03-10 00:35:38.095946 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml 
for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:35:38.095955 | orchestrator | 2026-03-10 00:35:38.095963 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-03-10 00:35:38.095970 | orchestrator | Tuesday 10 March 2026 00:35:21 +0000 (0:00:00.970) 0:06:55.572 ********* 2026-03-10 00:35:38.095977 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:35:38.095984 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:35:38.095991 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:35:38.095999 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:38.096006 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:35:38.096013 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:35:38.096020 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:35:38.096026 | orchestrator | 2026-03-10 00:35:38.096032 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-03-10 00:35:38.096038 | orchestrator | Tuesday 10 March 2026 00:35:22 +0000 (0:00:00.824) 0:06:56.396 ********* 2026-03-10 00:35:38.096045 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-03-10 00:35:38.096064 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-03-10 00:35:38.096071 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-03-10 00:35:38.096078 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-03-10 00:35:38.096084 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-03-10 00:35:38.096090 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-03-10 00:35:38.096096 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-03-10 00:35:38.096103 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-03-10 00:35:38.096109 | orchestrator | ok: [testbed-manager] => 
(item=docker_images) 2026-03-10 00:35:38.096115 | orchestrator | changed: [testbed-node-3] => (item=docker_images) 2026-03-10 00:35:38.096121 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-03-10 00:35:38.096127 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-03-10 00:35:38.096134 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-03-10 00:35:38.096140 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-03-10 00:35:38.096146 | orchestrator | 2026-03-10 00:35:38.096152 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-03-10 00:35:38.096158 | orchestrator | Tuesday 10 March 2026 00:35:25 +0000 (0:00:02.747) 0:06:59.144 ********* 2026-03-10 00:35:38.096164 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:35:38.096171 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:35:38.096177 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:35:38.096188 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:35:38.096194 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:35:38.096200 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:35:38.096206 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:35:38.096212 | orchestrator | 2026-03-10 00:35:38.096218 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-03-10 00:35:38.096225 | orchestrator | Tuesday 10 March 2026 00:35:25 +0000 (0:00:00.507) 0:06:59.652 ********* 2026-03-10 00:35:38.096233 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:35:38.096241 | orchestrator | 2026-03-10 00:35:38.096247 | orchestrator | TASK [osism.commons.docker_compose : Remove 
docker-compose apt preferences file] *** 2026-03-10 00:35:38.096253 | orchestrator | Tuesday 10 March 2026 00:35:26 +0000 (0:00:00.819) 0:07:00.471 ********* 2026-03-10 00:35:38.096259 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:35:38.096265 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:35:38.096272 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:35:38.096278 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:38.096284 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:35:38.096290 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:35:38.096300 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:35:38.096306 | orchestrator | 2026-03-10 00:35:38.096312 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-03-10 00:35:38.096319 | orchestrator | Tuesday 10 March 2026 00:35:27 +0000 (0:00:00.844) 0:07:01.315 ********* 2026-03-10 00:35:38.096325 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:35:38.096331 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:35:38.096337 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:35:38.096343 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:38.096349 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:35:38.096355 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:35:38.096361 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:35:38.096367 | orchestrator | 2026-03-10 00:35:38.096374 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-03-10 00:35:38.096380 | orchestrator | Tuesday 10 March 2026 00:35:28 +0000 (0:00:01.051) 0:07:02.367 ********* 2026-03-10 00:35:38.096386 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:35:38.096392 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:35:38.096402 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:35:38.096412 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:35:38.096423 | orchestrator | skipping: [testbed-node-0] 
2026-03-10 00:35:38.096433 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:35:38.096443 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:35:38.096453 | orchestrator | 2026-03-10 00:35:38.096464 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-10 00:35:38.096473 | orchestrator | Tuesday 10 March 2026 00:35:28 +0000 (0:00:00.523) 0:07:02.890 ********* 2026-03-10 00:35:38.096484 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:35:38.096493 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:35:38.096504 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:35:38.096515 | orchestrator | ok: [testbed-manager] 2026-03-10 00:35:38.096525 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:35:38.096609 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:35:38.096615 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:35:38.096621 | orchestrator | 2026-03-10 00:35:38.096628 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-03-10 00:35:38.096634 | orchestrator | Tuesday 10 March 2026 00:35:30 +0000 (0:00:01.393) 0:07:04.284 ********* 2026-03-10 00:35:38.096640 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:35:38.096646 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:35:38.096652 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:35:38.096659 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:35:38.096707 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:35:38.096714 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:35:38.096720 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:35:38.096726 | orchestrator | 2026-03-10 00:35:38.096733 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-03-10 00:35:38.096739 | orchestrator | Tuesday 10 March 2026 00:35:30 +0000 (0:00:00.522) 0:07:04.807 ********* 2026-03-10 00:35:38.096745 | orchestrator | 
ok: [testbed-manager] 2026-03-10 00:35:38.096751 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:35:38.096757 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:35:38.096764 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:35:38.096770 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:35:38.096776 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:35:38.096789 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:36:11.307421 | orchestrator | 2026-03-10 00:36:11.307554 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2026-03-10 00:36:11.307581 | orchestrator | Tuesday 10 March 2026 00:35:38 +0000 (0:00:07.421) 0:07:12.228 ********* 2026-03-10 00:36:11.307599 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:36:11.307617 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:36:11.307634 | orchestrator | ok: [testbed-manager] 2026-03-10 00:36:11.307702 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:36:11.307719 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:36:11.307736 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:36:11.307752 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:36:11.307769 | orchestrator | 2026-03-10 00:36:11.307785 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-03-10 00:36:11.307801 | orchestrator | Tuesday 10 March 2026 00:35:39 +0000 (0:00:01.580) 0:07:13.808 ********* 2026-03-10 00:36:11.307817 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:36:11.307835 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:36:11.307851 | orchestrator | ok: [testbed-manager] 2026-03-10 00:36:11.307867 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:36:11.307884 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:36:11.307900 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:36:11.307915 | orchestrator | changed: [testbed-node-2] 2026-03-10 
00:36:11.307931 | orchestrator | 2026-03-10 00:36:11.307948 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-03-10 00:36:11.307966 | orchestrator | Tuesday 10 March 2026 00:35:41 +0000 (0:00:01.714) 0:07:15.522 ********* 2026-03-10 00:36:11.307982 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:36:11.307999 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:36:11.308015 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:36:11.308031 | orchestrator | ok: [testbed-manager] 2026-03-10 00:36:11.308047 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:36:11.308064 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:36:11.308081 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:36:11.308097 | orchestrator | 2026-03-10 00:36:11.308114 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-10 00:36:11.308130 | orchestrator | Tuesday 10 March 2026 00:35:43 +0000 (0:00:01.724) 0:07:17.247 ********* 2026-03-10 00:36:11.308146 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:36:11.308162 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:36:11.308179 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:36:11.308195 | orchestrator | ok: [testbed-manager] 2026-03-10 00:36:11.308211 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:36:11.308226 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:36:11.308242 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:36:11.308257 | orchestrator | 2026-03-10 00:36:11.308273 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-10 00:36:11.308290 | orchestrator | Tuesday 10 March 2026 00:35:44 +0000 (0:00:01.108) 0:07:18.355 ********* 2026-03-10 00:36:11.308306 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:36:11.308323 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:36:11.308374 | orchestrator | skipping: 
[testbed-node-5] 2026-03-10 00:36:11.308392 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:36:11.308408 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:36:11.308424 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:36:11.308441 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:36:11.308456 | orchestrator | 2026-03-10 00:36:11.308473 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-03-10 00:36:11.308488 | orchestrator | Tuesday 10 March 2026 00:35:45 +0000 (0:00:00.856) 0:07:19.212 ********* 2026-03-10 00:36:11.308505 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:36:11.308522 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:36:11.308537 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:36:11.308553 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:36:11.308569 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:36:11.308584 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:36:11.308599 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:36:11.308615 | orchestrator | 2026-03-10 00:36:11.308632 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-03-10 00:36:11.308674 | orchestrator | Tuesday 10 March 2026 00:35:45 +0000 (0:00:00.541) 0:07:19.754 ********* 2026-03-10 00:36:11.308691 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:36:11.308708 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:36:11.308725 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:36:11.308743 | orchestrator | ok: [testbed-manager] 2026-03-10 00:36:11.308761 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:36:11.308777 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:36:11.308793 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:36:11.308810 | orchestrator | 2026-03-10 00:36:11.308827 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 
2026-03-10 00:36:11.308844 | orchestrator | Tuesday 10 March 2026 00:35:46 +0000 (0:00:00.519) 0:07:20.273 ********* 2026-03-10 00:36:11.308862 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:36:11.308878 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:36:11.308893 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:36:11.308909 | orchestrator | ok: [testbed-manager] 2026-03-10 00:36:11.308926 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:36:11.308942 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:36:11.308958 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:36:11.308975 | orchestrator | 2026-03-10 00:36:11.308992 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] *** 2026-03-10 00:36:11.309008 | orchestrator | Tuesday 10 March 2026 00:35:46 +0000 (0:00:00.730) 0:07:21.004 ********* 2026-03-10 00:36:11.309024 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:36:11.309041 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:36:11.309056 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:36:11.309073 | orchestrator | ok: [testbed-manager] 2026-03-10 00:36:11.309090 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:36:11.309106 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:36:11.309122 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:36:11.309137 | orchestrator | 2026-03-10 00:36:11.309155 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2026-03-10 00:36:11.309171 | orchestrator | Tuesday 10 March 2026 00:35:47 +0000 (0:00:00.550) 0:07:21.554 ********* 2026-03-10 00:36:11.309187 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:36:11.309205 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:36:11.309222 | orchestrator | ok: [testbed-manager] 2026-03-10 00:36:11.309238 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:36:11.309254 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:36:11.309270 | orchestrator | ok: [testbed-node-1] 
2026-03-10 00:36:11.309288 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:36:11.309306 | orchestrator |
2026-03-10 00:36:11.309348 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-10 00:36:11.309365 | orchestrator | Tuesday 10 March 2026 00:35:52 +0000 (0:00:05.466) 0:07:27.020 *********
2026-03-10 00:36:11.309381 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:36:11.309431 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:36:11.309450 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:36:11.309467 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:36:11.309483 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:36:11.309500 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:36:11.309518 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:36:11.309537 | orchestrator |
2026-03-10 00:36:11.309554 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-10 00:36:11.309570 | orchestrator | Tuesday 10 March 2026 00:35:53 +0000 (0:00:00.554) 0:07:27.575 *********
2026-03-10 00:36:11.309589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:36:11.309608 | orchestrator |
2026-03-10 00:36:11.309625 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-10 00:36:11.309664 | orchestrator | Tuesday 10 March 2026 00:35:54 +0000 (0:00:01.135) 0:07:28.710 *********
2026-03-10 00:36:11.309681 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:36:11.309698 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:36:11.309714 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:36:11.309730 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:36:11.309746 | orchestrator | ok: [testbed-manager]
2026-03-10 00:36:11.309763 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:36:11.309778 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:36:11.309794 | orchestrator |
2026-03-10 00:36:11.309809 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-10 00:36:11.309826 | orchestrator | Tuesday 10 March 2026 00:35:56 +0000 (0:00:01.867) 0:07:30.578 *********
2026-03-10 00:36:11.309841 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:36:11.309857 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:36:11.309874 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:36:11.309890 | orchestrator | ok: [testbed-manager]
2026-03-10 00:36:11.309905 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:36:11.309921 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:36:11.309937 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:36:11.309952 | orchestrator |
2026-03-10 00:36:11.309969 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-10 00:36:11.309985 | orchestrator | Tuesday 10 March 2026 00:35:57 +0000 (0:00:01.140) 0:07:31.718 *********
2026-03-10 00:36:11.310002 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:36:11.310091 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:36:11.310115 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:36:11.310131 | orchestrator | ok: [testbed-manager]
2026-03-10 00:36:11.310148 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:36:11.310164 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:36:11.310179 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:36:11.310195 | orchestrator |
2026-03-10 00:36:11.310220 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-10 00:36:11.310237 | orchestrator | Tuesday 10 March 2026 00:35:58 +0000 (0:00:00.891) 0:07:32.610 *********
2026-03-10 00:36:11.310254 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-10 00:36:11.310274 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-10 00:36:11.310292 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-10 00:36:11.310309 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-10 00:36:11.310326 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-10 00:36:11.310352 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-10 00:36:11.310369 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-10 00:36:11.310387 | orchestrator |
2026-03-10 00:36:11.310405 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-10 00:36:11.310421 | orchestrator | Tuesday 10 March 2026 00:36:00 +0000 (0:00:01.953) 0:07:34.563 *********
2026-03-10 00:36:11.310439 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:36:11.310455 | orchestrator |
2026-03-10 00:36:11.310474 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-10 00:36:11.310491 | orchestrator | Tuesday 10 March 2026 00:36:01 +0000 (0:00:00.896) 0:07:35.459 *********
2026-03-10 00:36:11.310508 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:36:11.310525 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:36:11.310543 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:36:11.310561 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:36:11.310579 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:36:11.310598 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:36:11.310615 | orchestrator | changed: [testbed-manager]
2026-03-10 00:36:11.310632 | orchestrator |
2026-03-10 00:36:11.310711 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-03-10 00:36:44.218724 | orchestrator | Tuesday 10 March 2026 00:36:11 +0000 (0:00:09.930) 0:07:45.390 *********
2026-03-10 00:36:44.218853 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:36:44.218873 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:36:44.218885 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:36:44.218896 | orchestrator | ok: [testbed-manager]
2026-03-10 00:36:44.218907 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:36:44.218919 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:36:44.218930 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:36:44.218941 | orchestrator |
2026-03-10 00:36:44.218954 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-03-10 00:36:44.218965 | orchestrator | Tuesday 10 March 2026 00:36:14 +0000 (0:00:03.363) 0:07:48.753 *********
2026-03-10 00:36:44.218976 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:36:44.218988 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:36:44.218999 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:36:44.219009 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:36:44.219020 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:36:44.219031 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:36:44.219042 | orchestrator |
2026-03-10 00:36:44.219053 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-03-10 00:36:44.219064 | orchestrator | Tuesday 10 March 2026 00:36:16 +0000 (0:00:01.347) 0:07:50.100 *********
2026-03-10 00:36:44.219076 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:36:44.219088 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:36:44.219099 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:36:44.219110 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:36:44.219121 | orchestrator | changed: [testbed-manager]
2026-03-10 00:36:44.219132 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:36:44.219146 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:36:44.219158 | orchestrator |
2026-03-10 00:36:44.219170 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-03-10 00:36:44.219183 | orchestrator |
2026-03-10 00:36:44.219195 | orchestrator | TASK [Include hardening role] **************************************************
2026-03-10 00:36:44.219208 | orchestrator | Tuesday 10 March 2026 00:36:17 +0000 (0:00:01.330) 0:07:51.431 *********
2026-03-10 00:36:44.219221 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:36:44.219262 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:36:44.219274 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:36:44.219285 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:36:44.219295 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:36:44.219306 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:36:44.219317 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:36:44.219328 | orchestrator |
2026-03-10 00:36:44.219339 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-03-10 00:36:44.219350 | orchestrator |
2026-03-10 00:36:44.219361 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-03-10 00:36:44.219372 | orchestrator | Tuesday 10 March 2026 00:36:18 +0000 (0:00:00.794) 0:07:52.225 *********
2026-03-10 00:36:44.219383 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:36:44.219394 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:36:44.219405 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:36:44.219416 | orchestrator | changed: [testbed-manager]
2026-03-10 00:36:44.219427 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:36:44.219453 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:36:44.219465 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:36:44.219475 | orchestrator |
2026-03-10 00:36:44.219486 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-03-10 00:36:44.219497 | orchestrator | Tuesday 10 March 2026 00:36:19 +0000 (0:00:01.366) 0:07:53.591 *********
2026-03-10 00:36:44.219508 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:36:44.219519 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:36:44.219530 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:36:44.219541 | orchestrator | ok: [testbed-manager]
2026-03-10 00:36:44.219552 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:36:44.219563 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:36:44.219574 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:36:44.219585 | orchestrator |
2026-03-10 00:36:44.219596 | orchestrator | TASK [Include auditd role] *****************************************************
2026-03-10 00:36:44.219607 | orchestrator | Tuesday 10 March 2026 00:36:21 +0000 (0:00:00.702) 0:07:55.111 *********
2026-03-10 00:36:44.219648 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:36:44.219668 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:36:44.219687 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:36:44.219705 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:36:44.219719 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:36:44.219730 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:36:44.219741 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:36:44.219752 | orchestrator |
2026-03-10 00:36:44.219763 | orchestrator | TASK [Include smartd role] *****************************************************
2026-03-10 00:36:44.219774 | orchestrator | Tuesday 10 March 2026 00:36:21 +0000 (0:00:00.702) 0:07:55.813 *********
2026-03-10 00:36:44.219786 | orchestrator | included: osism.services.smartd for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:36:44.219798 | orchestrator |
2026-03-10 00:36:44.219809 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-03-10 00:36:44.219820 | orchestrator | Tuesday 10 March 2026 00:36:22 +0000 (0:00:00.893) 0:07:56.707 *********
2026-03-10 00:36:44.219832 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:36:44.219846 | orchestrator |
2026-03-10 00:36:44.219857 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-03-10 00:36:44.219868 | orchestrator | Tuesday 10 March 2026 00:36:23 +0000 (0:00:00.837) 0:07:57.544 *********
2026-03-10 00:36:44.219879 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:36:44.219890 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:36:44.219900 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:36:44.219911 | orchestrator | changed: [testbed-manager]
2026-03-10 00:36:44.219931 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:36:44.219942 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:36:44.219953 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:36:44.219963 | orchestrator |
2026-03-10 00:36:44.219994 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-03-10 00:36:44.220006 | orchestrator | Tuesday 10 March 2026 00:36:32 +0000 (0:00:09.030) 0:08:06.575 *********
2026-03-10 00:36:44.220017 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:36:44.220027 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:36:44.220038 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:36:44.220049 | orchestrator | changed: [testbed-manager]
2026-03-10 00:36:44.220060 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:36:44.220071 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:36:44.220081 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:36:44.220092 | orchestrator |
2026-03-10 00:36:44.220103 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-03-10 00:36:44.220115 | orchestrator | Tuesday 10 March 2026 00:36:33 +0000 (0:00:00.855) 0:08:07.431 *********
2026-03-10 00:36:44.220125 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:36:44.220136 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:36:44.220147 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:36:44.220158 | orchestrator | changed: [testbed-manager]
2026-03-10 00:36:44.220168 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:36:44.220179 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:36:44.220190 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:36:44.220201 | orchestrator |
2026-03-10 00:36:44.220212 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-03-10 00:36:44.220222 | orchestrator | Tuesday 10 March 2026 00:36:34 +0000 (0:00:01.459) 0:08:08.890 *********
2026-03-10 00:36:44.220233 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:36:44.220244 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:36:44.220255 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:36:44.220265 | orchestrator | changed: [testbed-manager]
2026-03-10 00:36:44.220276 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:36:44.220287 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:36:44.220297 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:36:44.220308 | orchestrator |
2026-03-10 00:36:44.220319 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-03-10 00:36:44.220330 | orchestrator | Tuesday 10 March 2026 00:36:36 +0000 (0:00:01.957) 0:08:10.848 *********
2026-03-10 00:36:44.220341 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:36:44.220352 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:36:44.220362 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:36:44.220373 | orchestrator | changed: [testbed-manager]
2026-03-10 00:36:44.220384 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:36:44.220401 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:36:44.220420 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:36:44.220438 | orchestrator |
2026-03-10 00:36:44.220457 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-03-10 00:36:44.220472 | orchestrator | Tuesday 10 March 2026 00:36:38 +0000 (0:00:01.251) 0:08:12.100 *********
2026-03-10 00:36:44.220483 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:36:44.220494 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:36:44.220505 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:36:44.220516 | orchestrator | changed: [testbed-manager]
2026-03-10 00:36:44.220527 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:36:44.220544 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:36:44.220555 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:36:44.220566 | orchestrator |
2026-03-10 00:36:44.220576 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-03-10 00:36:44.220587 | orchestrator |
2026-03-10 00:36:44.220598 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-03-10 00:36:44.220608 | orchestrator | Tuesday 10 March 2026 00:36:39 +0000 (0:00:01.167) 0:08:13.267 *********
2026-03-10 00:36:44.220646 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:36:44.220657 | orchestrator |
2026-03-10 00:36:44.220668 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-10 00:36:44.220678 | orchestrator | Tuesday 10 March 2026 00:36:40 +0000 (0:00:01.037) 0:08:14.304 *********
2026-03-10 00:36:44.220689 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:36:44.220700 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:36:44.220711 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:36:44.220721 | orchestrator | ok: [testbed-manager]
2026-03-10 00:36:44.220732 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:36:44.220742 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:36:44.220753 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:36:44.220763 | orchestrator |
2026-03-10 00:36:44.220774 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-10 00:36:44.220785 | orchestrator | Tuesday 10 March 2026 00:36:41 +0000 (0:00:00.837) 0:08:15.141 *********
2026-03-10 00:36:44.220796 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:36:44.220806 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:36:44.220817 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:36:44.220828 | orchestrator | changed: [testbed-manager]
2026-03-10 00:36:44.220838 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:36:44.220849 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:36:44.220860 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:36:44.220870 | orchestrator |
2026-03-10 00:36:44.220881 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-03-10 00:36:44.220892 | orchestrator | Tuesday 10 March 2026 00:36:42 +0000 (0:00:01.209) 0:08:16.351 *********
2026-03-10 00:36:44.220903 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:36:44.220914 | orchestrator |
2026-03-10 00:36:44.220925 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-10 00:36:44.220935 | orchestrator | Tuesday 10 March 2026 00:36:43 +0000 (0:00:01.074) 0:08:17.425 *********
2026-03-10 00:36:44.220946 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:36:44.220957 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:36:44.220968 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:36:44.220978 | orchestrator | ok: [testbed-manager]
2026-03-10 00:36:44.220989 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:36:44.220999 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:36:44.221010 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:36:44.221021 | orchestrator |
2026-03-10 00:36:44.221039 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-10 00:36:45.773927 | orchestrator | Tuesday 10 March 2026 00:36:44 +0000 (0:00:00.875) 0:08:18.301 *********
2026-03-10 00:36:45.774079 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:36:45.774099 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:36:45.774111 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:36:45.774133 | orchestrator | changed: [testbed-manager]
2026-03-10 00:36:45.774145 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:36:45.774155 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:36:45.774166 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:36:45.774177 | orchestrator |
2026-03-10 00:36:45.774189 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:36:45.774202 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-10 00:36:45.774214 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-10 00:36:45.774225 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-10 00:36:45.774264 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-10 00:36:45.774275 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-03-10 00:36:45.774286 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-10 00:36:45.774297 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-10 00:36:45.774308 | orchestrator |
2026-03-10 00:36:45.774319 | orchestrator |
2026-03-10 00:36:45.774330 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:36:45.774341 | orchestrator | Tuesday 10 March 2026 00:36:45 +0000 (0:00:01.161) 0:08:19.462 *********
2026-03-10 00:36:45.774352 | orchestrator | ===============================================================================
2026-03-10 00:36:45.774363 | orchestrator | osism.commons.packages : Install required packages --------------------- 87.03s
2026-03-10 00:36:45.774374 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.43s
2026-03-10 00:36:45.774400 | orchestrator | osism.commons.packages : Download required packages -------------------- 32.27s
2026-03-10 00:36:45.774411 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.58s
2026-03-10 00:36:45.774422 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 13.88s
2026-03-10 00:36:45.774433 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 13.78s
2026-03-10 00:36:45.774444 | orchestrator | osism.services.docker : Install docker package ------------------------- 10.50s
2026-03-10 00:36:45.774455 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.93s
2026-03-10 00:36:45.774466 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.17s
2026-03-10 00:36:45.774479 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 9.03s
2026-03-10 00:36:45.774491 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 8.95s
2026-03-10 00:36:45.774504 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.95s
2026-03-10 00:36:45.774517 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.92s
2026-03-10 00:36:45.774529 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.84s
2026-03-10 00:36:45.774541 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 7.75s
2026-03-10 00:36:45.774553 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.42s
2026-03-10 00:36:45.774566 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.61s
2026-03-10 00:36:45.774578 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 5.95s
2026-03-10 00:36:45.774590 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.82s
2026-03-10 00:36:45.774602 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.78s
2026-03-10 00:36:46.113598 | orchestrator | + osism apply fail2ban
2026-03-10 00:36:59.179152 | orchestrator | 2026-03-10 00:36:59 | INFO  | Prepare task for execution of fail2ban.
2026-03-10 00:36:59.255838 | orchestrator | 2026-03-10 00:36:59 | INFO  | Task 81b22c60-b077-40bc-84b1-09e0aeaf21d0 (fail2ban) was prepared for execution.
2026-03-10 00:36:59.255938 | orchestrator | 2026-03-10 00:36:59 | INFO  | It takes a moment until task 81b22c60-b077-40bc-84b1-09e0aeaf21d0 (fail2ban) has been started and output is visible here.
2026-03-10 00:37:22.669427 | orchestrator |
2026-03-10 00:37:22.669541 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-03-10 00:37:22.669650 | orchestrator |
2026-03-10 00:37:22.669665 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-03-10 00:37:22.669677 | orchestrator | Tuesday 10 March 2026 00:37:04 +0000 (0:00:00.264) 0:00:00.264 *********
2026-03-10 00:37:22.669690 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 00:37:22.669704 | orchestrator |
2026-03-10 00:37:22.669716 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-03-10 00:37:22.669727 | orchestrator | Tuesday 10 March 2026 00:37:05 +0000 (0:00:01.156) 0:00:01.421 *********
2026-03-10 00:37:22.669738 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:37:22.669750 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:37:22.669761 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:37:22.669772 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:37:22.669783 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:37:22.669794 | orchestrator | changed: [testbed-manager]
2026-03-10 00:37:22.669804 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:37:22.669815 | orchestrator |
2026-03-10 00:37:22.669826 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-03-10 00:37:22.669837 | orchestrator | Tuesday 10 March 2026 00:37:17 +0000 (0:00:12.198) 0:00:13.619 *********
2026-03-10 00:37:22.669848 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:37:22.669859 | orchestrator | changed: [testbed-manager]
2026-03-10 00:37:22.669870 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:37:22.669880 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:37:22.669891 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:37:22.669902 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:37:22.669912 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:37:22.669923 | orchestrator |
2026-03-10 00:37:22.669934 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-10 00:37:22.669945 | orchestrator | Tuesday 10 March 2026 00:37:18 +0000 (0:00:01.483) 0:00:15.102 *********
2026-03-10 00:37:22.669956 | orchestrator | ok: [testbed-manager]
2026-03-10 00:37:22.669970 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:37:22.669982 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:37:22.669994 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:37:22.670007 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:37:22.670086 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:37:22.670099 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:37:22.670112 | orchestrator |
2026-03-10 00:37:22.670124 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-10 00:37:22.670137 | orchestrator | Tuesday 10 March 2026 00:37:20 +0000 (0:00:01.621) 0:00:16.723 *********
2026-03-10 00:37:22.670149 | orchestrator | changed: [testbed-manager]
2026-03-10 00:37:22.670163 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:37:22.670207 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:37:22.670221 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:37:22.670234 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:37:22.670246 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:37:22.670258 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:37:22.670270 | orchestrator |
2026-03-10 00:37:22.670283 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:37:22.670312 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:37:22.670327 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:37:22.670338 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:37:22.670349 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:37:22.670373 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:37:22.670384 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:37:22.670395 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:37:22.670405 | orchestrator |
2026-03-10 00:37:22.670417 | orchestrator |
2026-03-10 00:37:22.670427 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:37:22.670439 | orchestrator | Tuesday 10 March 2026 00:37:22 +0000 (0:00:01.785) 0:00:18.509 *********
2026-03-10 00:37:22.670449 | orchestrator | ===============================================================================
2026-03-10 00:37:22.670460 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 12.20s
2026-03-10 00:37:22.670471 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.79s
2026-03-10 00:37:22.670482 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.62s
2026-03-10 00:37:22.670493 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.48s
2026-03-10 00:37:22.670503 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.16s
2026-03-10 00:37:23.006984 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-03-10 00:37:23.007092 | orchestrator | + osism apply network
2026-03-10 00:37:35.289798 | orchestrator | 2026-03-10 00:37:35 | INFO  | Prepare task for execution of network.
2026-03-10 00:37:35.366828 | orchestrator | 2026-03-10 00:37:35 | INFO  | Task 073d24cd-1b43-4ddd-a0f7-e3683a3e6bf3 (network) was prepared for execution.
2026-03-10 00:37:35.366949 | orchestrator | 2026-03-10 00:37:35 | INFO  | It takes a moment until task 073d24cd-1b43-4ddd-a0f7-e3683a3e6bf3 (network) has been started and output is visible here.
2026-03-10 00:38:04.815721 | orchestrator |
2026-03-10 00:38:04.815848 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-10 00:38:04.815869 | orchestrator |
2026-03-10 00:38:04.815884 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-10 00:38:04.815898 | orchestrator | Tuesday 10 March 2026 00:37:39 +0000 (0:00:00.265) 0:00:00.265 *********
2026-03-10 00:38:04.815912 | orchestrator | ok: [testbed-manager]
2026-03-10 00:38:04.815926 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:38:04.815940 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:38:04.815954 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:38:04.815967 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:38:04.815981 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:38:04.815995 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:38:04.816009 | orchestrator |
2026-03-10 00:38:04.816024 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-10 00:38:04.816038 | orchestrator | Tuesday 10 March 2026 00:37:40 +0000 (0:00:00.701) 0:00:00.967 *********
2026-03-10 00:38:04.816054 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 00:38:04.816070 | orchestrator |
2026-03-10 00:38:04.816085 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-10 00:38:04.816098 | orchestrator | Tuesday 10 March 2026 00:37:41 +0000 (0:00:01.224) 0:00:02.191 *********
2026-03-10 00:38:04.816112 | orchestrator | ok: [testbed-manager]
2026-03-10 00:38:04.816125 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:38:04.816137 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:38:04.816149 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:38:04.816191 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:38:04.816205 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:38:04.816218 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:38:04.816232 | orchestrator |
2026-03-10 00:38:04.816244 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-10 00:38:04.816257 | orchestrator | Tuesday 10 March 2026 00:37:43 +0000 (0:00:01.955) 0:00:04.147 *********
2026-03-10 00:38:04.816270 | orchestrator | ok: [testbed-manager]
2026-03-10 00:38:04.816283 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:38:04.816297 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:38:04.816310 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:38:04.816322 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:38:04.816335 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:38:04.816344 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:38:04.816352 | orchestrator |
2026-03-10 00:38:04.816360 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-10 00:38:04.816368 | orchestrator | Tuesday 10 March 2026 00:37:45 +0000 (0:00:01.711) 0:00:05.858 *********
2026-03-10 00:38:04.816376 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-10 00:38:04.816385 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-10 00:38:04.816393 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-10 00:38:04.816401 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-10 00:38:04.816409 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-10 00:38:04.816417 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-10 00:38:04.816425 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-10 00:38:04.816432 | orchestrator |
2026-03-10 00:38:04.816441 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-10 00:38:04.816449 | orchestrator | Tuesday 10 March 2026 00:37:46 +0000 (0:00:00.971) 0:00:06.830 *********
2026-03-10 00:38:04.816457 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-10 00:38:04.816466 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-10 00:38:04.816473 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-10 00:38:04.816481 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-10 00:38:04.816489 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-10 00:38:04.816497 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-10 00:38:04.816505 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-10 00:38:04.816513 | orchestrator |
2026-03-10 00:38:04.816520 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-10 00:38:04.816528 | orchestrator | Tuesday 10 March 2026 00:37:49 +0000 (0:00:03.575) 0:00:10.405 *********
2026-03-10 00:38:04.816536 | orchestrator | changed: [testbed-manager]
2026-03-10 00:38:04.816544 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:38:04.816552 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:38:04.816605 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:38:04.816621 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:38:04.816634 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:38:04.816658 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:38:04.816667 | orchestrator |
2026-03-10 00:38:04.816675 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-10 00:38:04.816683 | orchestrator | Tuesday 10 March 2026 00:37:51 +0000 (0:00:01.642) 0:00:12.048 *********
2026-03-10 00:38:04.816691 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-10 00:38:04.816699 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-10 00:38:04.816707 | orchestrator | ok: [testbed-node-3
-> localhost] 2026-03-10 00:38:04.816732 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-10 00:38:04.816756 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-10 00:38:04.816768 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-10 00:38:04.816782 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-10 00:38:04.816795 | orchestrator | 2026-03-10 00:38:04.816806 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-10 00:38:04.816827 | orchestrator | Tuesday 10 March 2026 00:37:53 +0000 (0:00:01.835) 0:00:13.883 ********* 2026-03-10 00:38:04.816838 | orchestrator | ok: [testbed-manager] 2026-03-10 00:38:04.816849 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:38:04.816860 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:38:04.816871 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:38:04.816881 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:38:04.816888 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:38:04.816895 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:38:04.816901 | orchestrator | 2026-03-10 00:38:04.816908 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-10 00:38:04.816933 | orchestrator | Tuesday 10 March 2026 00:37:54 +0000 (0:00:01.165) 0:00:15.049 ********* 2026-03-10 00:38:04.816940 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:38:04.816947 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:38:04.816954 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:38:04.816960 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:38:04.816967 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:38:04.816974 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:38:04.816980 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:38:04.816987 | orchestrator | 2026-03-10 00:38:04.816994 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2026-03-10 00:38:04.817000 | orchestrator | Tuesday 10 March 2026 00:37:55 +0000 (0:00:00.672) 0:00:15.722 ********* 2026-03-10 00:38:04.817007 | orchestrator | ok: [testbed-manager] 2026-03-10 00:38:04.817014 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:38:04.817020 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:38:04.817027 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:38:04.817034 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:38:04.817040 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:38:04.817047 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:38:04.817053 | orchestrator | 2026-03-10 00:38:04.817060 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-10 00:38:04.817066 | orchestrator | Tuesday 10 March 2026 00:37:57 +0000 (0:00:02.144) 0:00:17.867 ********* 2026-03-10 00:38:04.817073 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:38:04.817080 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:38:04.817086 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:38:04.817093 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:38:04.817099 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:38:04.817106 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:38:04.817113 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-03-10 00:38:04.817121 | orchestrator | 2026-03-10 00:38:04.817128 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-10 00:38:04.817134 | orchestrator | Tuesday 10 March 2026 00:37:58 +0000 (0:00:01.046) 0:00:18.913 ********* 2026-03-10 00:38:04.817141 | orchestrator | ok: [testbed-manager] 2026-03-10 00:38:04.817148 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:38:04.817154 | orchestrator | changed: [testbed-node-1] 2026-03-10 
00:38:04.817161 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:38:04.817167 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:38:04.817174 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:38:04.817180 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:38:04.817187 | orchestrator | 2026-03-10 00:38:04.817193 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-10 00:38:04.817200 | orchestrator | Tuesday 10 March 2026 00:38:00 +0000 (0:00:01.653) 0:00:20.566 ********* 2026-03-10 00:38:04.817212 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:38:04.817221 | orchestrator | 2026-03-10 00:38:04.817227 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-10 00:38:04.817240 | orchestrator | Tuesday 10 March 2026 00:38:01 +0000 (0:00:01.327) 0:00:21.894 ********* 2026-03-10 00:38:04.817246 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:38:04.817253 | orchestrator | ok: [testbed-manager] 2026-03-10 00:38:04.817259 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:38:04.817266 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:38:04.817273 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:38:04.817279 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:38:04.817286 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:38:04.817292 | orchestrator | 2026-03-10 00:38:04.817299 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-10 00:38:04.817306 | orchestrator | Tuesday 10 March 2026 00:38:02 +0000 (0:00:01.376) 0:00:23.270 ********* 2026-03-10 00:38:04.817312 | orchestrator | ok: [testbed-manager] 2026-03-10 00:38:04.817319 | orchestrator | ok: [testbed-node-0] 2026-03-10 
00:38:04.817325 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:38:04.817332 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:38:04.817338 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:38:04.817345 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:38:04.817351 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:38:04.817358 | orchestrator | 2026-03-10 00:38:04.817365 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-10 00:38:04.817371 | orchestrator | Tuesday 10 March 2026 00:38:03 +0000 (0:00:00.686) 0:00:23.956 ********* 2026-03-10 00:38:04.817378 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-10 00:38:04.817385 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-10 00:38:04.817391 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-10 00:38:04.817398 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-10 00:38:04.817405 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-10 00:38:04.817411 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-10 00:38:04.817418 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-10 00:38:04.817424 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-10 00:38:04.817431 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-10 00:38:04.817438 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-10 00:38:04.817444 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-10 00:38:04.817451 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-10 00:38:04.817457 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-03-10 00:38:04.817464 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-10 00:38:04.817471 | orchestrator | 2026-03-10 00:38:04.817482 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-10 00:38:22.139980 | orchestrator | Tuesday 10 March 2026 00:38:04 +0000 (0:00:01.368) 0:00:25.325 ********* 2026-03-10 00:38:22.140085 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:38:22.140100 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:38:22.140111 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:38:22.140121 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:38:22.140131 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:38:22.140140 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:38:22.140150 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:38:22.140160 | orchestrator | 2026-03-10 00:38:22.140170 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-10 00:38:22.140180 | orchestrator | Tuesday 10 March 2026 00:38:05 +0000 (0:00:00.643) 0:00:25.968 ********* 2026-03-10 00:38:22.140192 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-manager, testbed-node-1, testbed-node-5, testbed-node-3, testbed-node-0, testbed-node-2, testbed-node-4 2026-03-10 00:38:22.140229 | orchestrator | 2026-03-10 00:38:22.140247 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-10 00:38:22.140263 | orchestrator | Tuesday 10 March 2026 00:38:10 +0000 (0:00:04.754) 0:00:30.722 ********* 2026-03-10 00:38:22.140280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:38:22.140296 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:38:22.140315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:38:22.140352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:38:22.140376 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:38:22.140387 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:38:22.140397 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:38:22.140407 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': 
{'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:38:22.140417 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:38:22.140426 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:38:22.140436 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:38:22.140464 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:38:22.140475 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:38:22.140493 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 
'mtu': 1350, 'vni': 23}}) 2026-03-10 00:38:22.140504 | orchestrator | 2026-03-10 00:38:22.140513 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-10 00:38:22.140524 | orchestrator | Tuesday 10 March 2026 00:38:16 +0000 (0:00:06.511) 0:00:37.233 ********* 2026-03-10 00:38:22.140537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:38:22.140548 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:38:22.140605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:38:22.140616 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:38:22.140632 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:38:22.140644 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', 
'192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:38:22.140656 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-10 00:38:22.140667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:38:22.140678 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:38:22.140689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:38:22.140700 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:38:22.140711 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:38:22.140740 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:38:35.959380 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-10 00:38:35.959493 | orchestrator | 2026-03-10 00:38:35.959509 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-10 00:38:35.959522 | orchestrator | Tuesday 10 March 2026 00:38:22 +0000 (0:00:05.851) 0:00:43.085 ********* 2026-03-10 00:38:35.959536 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:38:35.959623 | orchestrator | 2026-03-10 00:38:35.959637 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-10 00:38:35.959648 | orchestrator | Tuesday 10 March 2026 00:38:23 +0000 (0:00:01.437) 0:00:44.522 ********* 2026-03-10 00:38:35.959660 | orchestrator | ok: [testbed-manager] 2026-03-10 00:38:35.959673 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:38:35.959684 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:38:35.959695 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:38:35.959706 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:38:35.959717 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:38:35.959727 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:38:35.959738 | orchestrator | 2026-03-10 00:38:35.959749 | orchestrator | TASK [osism.commons.network : Remove unused configuration 
files] *************** 2026-03-10 00:38:35.959761 | orchestrator | Tuesday 10 March 2026 00:38:24 +0000 (0:00:00.956) 0:00:45.479 ********* 2026-03-10 00:38:35.959772 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-10 00:38:35.959784 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-10 00:38:35.959795 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-10 00:38:35.959806 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-10 00:38:35.959817 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:38:35.959828 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-10 00:38:35.959858 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-10 00:38:35.959869 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-10 00:38:35.959880 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-10 00:38:35.959891 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:38:35.959902 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-10 00:38:35.959913 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-10 00:38:35.959924 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-10 00:38:35.959935 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-10 00:38:35.959946 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-10 00:38:35.959956 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-10 
00:38:35.959967 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-10 00:38:35.959998 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-10 00:38:35.960009 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:38:35.960020 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-10 00:38:35.960031 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-10 00:38:35.960042 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-10 00:38:35.960053 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-10 00:38:35.960064 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:38:35.960075 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-10 00:38:35.960086 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-10 00:38:35.960097 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-10 00:38:35.960108 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-10 00:38:35.960118 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:38:35.960129 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:38:35.960140 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-10 00:38:35.960151 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-10 00:38:35.960162 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-10 00:38:35.960172 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-10 00:38:35.960183 | 
orchestrator | skipping: [testbed-node-5] 2026-03-10 00:38:35.960194 | orchestrator | 2026-03-10 00:38:35.960205 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-03-10 00:38:35.960235 | orchestrator | Tuesday 10 March 2026 00:38:25 +0000 (0:00:01.018) 0:00:46.498 ********* 2026-03-10 00:38:35.960247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:38:35.960258 | orchestrator | 2026-03-10 00:38:35.960269 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-03-10 00:38:35.960280 | orchestrator | Tuesday 10 March 2026 00:38:27 +0000 (0:00:01.297) 0:00:47.796 ********* 2026-03-10 00:38:35.960291 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:38:35.960302 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:38:35.960313 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:38:35.960324 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:38:35.960334 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:38:35.960345 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:38:35.960356 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:38:35.960367 | orchestrator | 2026-03-10 00:38:35.960377 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-03-10 00:38:35.960388 | orchestrator | Tuesday 10 March 2026 00:38:27 +0000 (0:00:00.653) 0:00:48.449 ********* 2026-03-10 00:38:35.960399 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:38:35.960410 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:38:35.960420 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:38:35.960431 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:38:35.960442 | 
orchestrator | skipping: [testbed-node-3] 2026-03-10 00:38:35.960453 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:38:35.960463 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:38:35.960474 | orchestrator | 2026-03-10 00:38:35.960485 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-03-10 00:38:35.960496 | orchestrator | Tuesday 10 March 2026 00:38:28 +0000 (0:00:00.845) 0:00:49.295 ********* 2026-03-10 00:38:35.960507 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:38:35.960524 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:38:35.960535 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:38:35.960546 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:38:35.960577 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:38:35.960587 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:38:35.960598 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:38:35.960609 | orchestrator | 2026-03-10 00:38:35.960620 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-03-10 00:38:35.960631 | orchestrator | Tuesday 10 March 2026 00:38:29 +0000 (0:00:00.673) 0:00:49.969 ********* 2026-03-10 00:38:35.960642 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:38:35.960652 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:38:35.960669 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:38:35.960680 | orchestrator | ok: [testbed-manager] 2026-03-10 00:38:35.960691 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:38:35.960701 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:38:35.960712 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:38:35.960723 | orchestrator | 2026-03-10 00:38:35.960734 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-03-10 00:38:35.960745 | orchestrator | Tuesday 10 March 2026 00:38:31 +0000 (0:00:01.686) 0:00:51.655 ********* 
2026-03-10 00:38:35.960756 | orchestrator | ok: [testbed-manager] 2026-03-10 00:38:35.960766 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:38:35.960777 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:38:35.960788 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:38:35.960798 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:38:35.960809 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:38:35.960820 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:38:35.960830 | orchestrator | 2026-03-10 00:38:35.960841 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-03-10 00:38:35.960852 | orchestrator | Tuesday 10 March 2026 00:38:32 +0000 (0:00:01.020) 0:00:52.675 ********* 2026-03-10 00:38:35.960863 | orchestrator | ok: [testbed-manager] 2026-03-10 00:38:35.960873 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:38:35.960884 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:38:35.960895 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:38:35.960905 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:38:35.960916 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:38:35.960926 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:38:35.960937 | orchestrator | 2026-03-10 00:38:35.960948 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-03-10 00:38:35.960959 | orchestrator | Tuesday 10 March 2026 00:38:34 +0000 (0:00:02.396) 0:00:55.072 ********* 2026-03-10 00:38:35.960970 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:38:35.960981 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:38:35.960991 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:38:35.961002 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:38:35.961013 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:38:35.961024 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:38:35.961035 | orchestrator | skipping: [testbed-node-5] 2026-03-10 
00:38:35.961045 | orchestrator | 2026-03-10 00:38:35.961056 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-03-10 00:38:35.961068 | orchestrator | Tuesday 10 March 2026 00:38:35 +0000 (0:00:00.861) 0:00:55.934 ********* 2026-03-10 00:38:35.961078 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:38:35.961089 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:38:35.961100 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:38:35.961110 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:38:35.961121 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:38:35.961132 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:38:35.961142 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:38:35.961153 | orchestrator | 2026-03-10 00:38:35.961164 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:38:35.961176 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-10 00:38:35.961195 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-10 00:38:35.961214 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-10 00:38:36.363370 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-10 00:38:36.363468 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-10 00:38:36.363480 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-10 00:38:36.363491 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-10 00:38:36.363501 | orchestrator | 2026-03-10 00:38:36.363511 | orchestrator | 2026-03-10 00:38:36.363521 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:38:36.363533 | orchestrator | Tuesday 10 March 2026 00:38:35 +0000 (0:00:00.556) 0:00:56.490 ********* 2026-03-10 00:38:36.363543 | orchestrator | =============================================================================== 2026-03-10 00:38:36.363612 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 6.51s 2026-03-10 00:38:36.363623 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.85s 2026-03-10 00:38:36.363632 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.75s 2026-03-10 00:38:36.363642 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.58s 2026-03-10 00:38:36.363652 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.40s 2026-03-10 00:38:36.363661 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.14s 2026-03-10 00:38:36.363671 | orchestrator | osism.commons.network : Install required packages ----------------------- 1.96s 2026-03-10 00:38:36.363680 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.84s 2026-03-10 00:38:36.363690 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.71s 2026-03-10 00:38:36.363700 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.69s 2026-03-10 00:38:36.363709 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.65s 2026-03-10 00:38:36.363719 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.64s 2026-03-10 00:38:36.363729 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.44s 2026-03-10 00:38:36.363738 | orchestrator | 
osism.commons.network : List existing configuration files --------------- 1.38s 2026-03-10 00:38:36.363748 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.37s 2026-03-10 00:38:36.363758 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.33s 2026-03-10 00:38:36.363767 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.30s 2026-03-10 00:38:36.363777 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.22s 2026-03-10 00:38:36.363787 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.17s 2026-03-10 00:38:36.363796 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.05s 2026-03-10 00:38:36.699585 | orchestrator | + osism apply wireguard 2026-03-10 00:38:48.835617 | orchestrator | 2026-03-10 00:38:48 | INFO  | Prepare task for execution of wireguard. 2026-03-10 00:38:48.935581 | orchestrator | 2026-03-10 00:38:48 | INFO  | Task 38d51b7e-cb26-4180-a82b-086024be880c (wireguard) was prepared for execution. 2026-03-10 00:38:48.935686 | orchestrator | 2026-03-10 00:38:48 | INFO  | It takes a moment until task 38d51b7e-cb26-4180-a82b-086024be880c (wireguard) has been started and output is visible here. 
2026-03-10 00:39:10.079527 | orchestrator | 2026-03-10 00:39:10.079715 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-03-10 00:39:10.079732 | orchestrator | 2026-03-10 00:39:10.079744 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-03-10 00:39:10.079756 | orchestrator | Tuesday 10 March 2026 00:38:53 +0000 (0:00:00.230) 0:00:00.230 ********* 2026-03-10 00:39:10.079767 | orchestrator | ok: [testbed-manager] 2026-03-10 00:39:10.079779 | orchestrator | 2026-03-10 00:39:10.079790 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-03-10 00:39:10.079801 | orchestrator | Tuesday 10 March 2026 00:38:55 +0000 (0:00:01.619) 0:00:01.849 ********* 2026-03-10 00:39:10.079812 | orchestrator | changed: [testbed-manager] 2026-03-10 00:39:10.079824 | orchestrator | 2026-03-10 00:39:10.079835 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-03-10 00:39:10.079846 | orchestrator | Tuesday 10 March 2026 00:39:01 +0000 (0:00:06.950) 0:00:08.799 ********* 2026-03-10 00:39:10.079857 | orchestrator | changed: [testbed-manager] 2026-03-10 00:39:10.079868 | orchestrator | 2026-03-10 00:39:10.079879 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-03-10 00:39:10.079890 | orchestrator | Tuesday 10 March 2026 00:39:02 +0000 (0:00:00.587) 0:00:09.387 ********* 2026-03-10 00:39:10.079901 | orchestrator | changed: [testbed-manager] 2026-03-10 00:39:10.079912 | orchestrator | 2026-03-10 00:39:10.079923 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-03-10 00:39:10.079934 | orchestrator | Tuesday 10 March 2026 00:39:02 +0000 (0:00:00.433) 0:00:09.820 ********* 2026-03-10 00:39:10.079944 | orchestrator | ok: [testbed-manager] 2026-03-10 00:39:10.079955 | orchestrator | 2026-03-10 
00:39:10.079966 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-03-10 00:39:10.079977 | orchestrator | Tuesday 10 March 2026 00:39:03 +0000 (0:00:00.724) 0:00:10.545 ********* 2026-03-10 00:39:10.079988 | orchestrator | ok: [testbed-manager] 2026-03-10 00:39:10.079999 | orchestrator | 2026-03-10 00:39:10.080010 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-03-10 00:39:10.080021 | orchestrator | Tuesday 10 March 2026 00:39:04 +0000 (0:00:00.477) 0:00:11.022 ********* 2026-03-10 00:39:10.080032 | orchestrator | ok: [testbed-manager] 2026-03-10 00:39:10.080043 | orchestrator | 2026-03-10 00:39:10.080054 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-03-10 00:39:10.080065 | orchestrator | Tuesday 10 March 2026 00:39:04 +0000 (0:00:00.462) 0:00:11.485 ********* 2026-03-10 00:39:10.080076 | orchestrator | changed: [testbed-manager] 2026-03-10 00:39:10.080087 | orchestrator | 2026-03-10 00:39:10.080098 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-03-10 00:39:10.080109 | orchestrator | Tuesday 10 March 2026 00:39:05 +0000 (0:00:01.323) 0:00:12.808 ********* 2026-03-10 00:39:10.080120 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-10 00:39:10.080131 | orchestrator | changed: [testbed-manager] 2026-03-10 00:39:10.080142 | orchestrator | 2026-03-10 00:39:10.080153 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-03-10 00:39:10.080164 | orchestrator | Tuesday 10 March 2026 00:39:06 +0000 (0:00:00.954) 0:00:13.763 ********* 2026-03-10 00:39:10.080175 | orchestrator | changed: [testbed-manager] 2026-03-10 00:39:10.080186 | orchestrator | 2026-03-10 00:39:10.080197 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-03-10 
00:39:10.080208 | orchestrator | Tuesday 10 March 2026 00:39:08 +0000 (0:00:01.823) 0:00:15.587 ********* 2026-03-10 00:39:10.080218 | orchestrator | changed: [testbed-manager] 2026-03-10 00:39:10.080229 | orchestrator | 2026-03-10 00:39:10.080240 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:39:10.080297 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:39:10.080310 | orchestrator | 2026-03-10 00:39:10.080321 | orchestrator | 2026-03-10 00:39:10.080332 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:39:10.080342 | orchestrator | Tuesday 10 March 2026 00:39:09 +0000 (0:00:00.945) 0:00:16.532 ********* 2026-03-10 00:39:10.080353 | orchestrator | =============================================================================== 2026-03-10 00:39:10.080364 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.95s 2026-03-10 00:39:10.080382 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.82s 2026-03-10 00:39:10.080393 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.62s 2026-03-10 00:39:10.080404 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.32s 2026-03-10 00:39:10.080414 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.95s 2026-03-10 00:39:10.080425 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.95s 2026-03-10 00:39:10.080436 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.72s 2026-03-10 00:39:10.080447 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.59s 2026-03-10 00:39:10.080457 | orchestrator | osism.services.wireguard : Get 
public key - server ---------------------- 0.48s 2026-03-10 00:39:10.080468 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.46s 2026-03-10 00:39:10.080479 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s 2026-03-10 00:39:10.427202 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-03-10 00:39:10.463200 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-10 00:39:10.463284 | orchestrator | Dload Upload Total Spent Left Speed 2026-03-10 00:39:10.536369 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 204 0 --:--:-- --:--:-- --:--:-- 208 2026-03-10 00:39:10.551993 | orchestrator | + osism apply --environment custom workarounds 2026-03-10 00:39:12.629479 | orchestrator | 2026-03-10 00:39:12 | INFO  | Trying to run play workarounds in environment custom 2026-03-10 00:39:22.728290 | orchestrator | 2026-03-10 00:39:22 | INFO  | Prepare task for execution of workarounds. 2026-03-10 00:39:22.816473 | orchestrator | 2026-03-10 00:39:22 | INFO  | Task 4160b399-a5b6-40f7-9e89-813a841e7f18 (workarounds) was prepared for execution. 2026-03-10 00:39:22.816612 | orchestrator | 2026-03-10 00:39:22 | INFO  | It takes a moment until task 4160b399-a5b6-40f7-9e89-813a841e7f18 (workarounds) has been started and output is visible here. 
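For context, the wireguard play above generates server and preshared keys and renders a wg0.conf on testbed-manager. A minimal sketch of what such a configuration typically looks like is shown below; every address, port, and key here is an illustrative placeholder, since the real values templated by osism.services.wireguard are not visible in this log:

```ini
# /etc/wireguard/wg0.conf -- hedged sketch only; the actual file is rendered
# by the osism.services.wireguard role and its contents do not appear here.
[Interface]
Address = 192.168.0.1/24        ; placeholder tunnel address
ListenPort = 51820              ; WireGuard's conventional default port (assumed)
PrivateKey = <server-private-key>

[Peer]
PublicKey = <client-public-key>
PresharedKey = <preshared-key>
AllowedIPs = 192.168.0.2/32     ; placeholder client tunnel address
```

The role then manages wg-quick@wg0.service, whose restart is the handler visible in the play output above.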
2026-03-10 00:39:48.544730 | orchestrator | 2026-03-10 00:39:48.544836 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 00:39:48.544855 | orchestrator | 2026-03-10 00:39:48.544868 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-03-10 00:39:48.544880 | orchestrator | Tuesday 10 March 2026 00:39:27 +0000 (0:00:00.133) 0:00:00.133 ********* 2026-03-10 00:39:48.544892 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-03-10 00:39:48.544904 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-03-10 00:39:48.544915 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-03-10 00:39:48.544926 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-03-10 00:39:48.544937 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-03-10 00:39:48.544949 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-03-10 00:39:48.544960 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-03-10 00:39:48.544990 | orchestrator | 2026-03-10 00:39:48.545002 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-03-10 00:39:48.545013 | orchestrator | 2026-03-10 00:39:48.545024 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-03-10 00:39:48.545035 | orchestrator | Tuesday 10 March 2026 00:39:28 +0000 (0:00:00.844) 0:00:00.978 ********* 2026-03-10 00:39:48.545047 | orchestrator | ok: [testbed-manager] 2026-03-10 00:39:48.545059 | orchestrator | 2026-03-10 00:39:48.545070 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-03-10 00:39:48.545081 | orchestrator | 2026-03-10 00:39:48.545092 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-03-10 00:39:48.545104 | orchestrator | Tuesday 10 March 2026 00:39:30 +0000 (0:00:02.362) 0:00:03.340 ********* 2026-03-10 00:39:48.545115 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:39:48.545125 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:39:48.545136 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:39:48.545147 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:39:48.545158 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:39:48.545169 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:39:48.545180 | orchestrator | 2026-03-10 00:39:48.545191 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-03-10 00:39:48.545202 | orchestrator | 2026-03-10 00:39:48.545213 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-03-10 00:39:48.545224 | orchestrator | Tuesday 10 March 2026 00:39:32 +0000 (0:00:01.754) 0:00:05.094 ********* 2026-03-10 00:39:48.545236 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-10 00:39:48.545247 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-10 00:39:48.545258 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-10 00:39:48.545269 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-10 00:39:48.545280 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-10 00:39:48.545300 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-10 00:39:48.545311 | orchestrator | 2026-03-10 00:39:48.545322 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-03-10 00:39:48.545334 | orchestrator | Tuesday 10 March 2026 00:39:33 +0000 (0:00:01.467) 0:00:06.562 ********* 2026-03-10 00:39:48.545345 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:39:48.545356 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:39:48.545367 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:39:48.545378 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:39:48.545388 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:39:48.545399 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:39:48.545410 | orchestrator | 2026-03-10 00:39:48.545421 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-03-10 00:39:48.545432 | orchestrator | Tuesday 10 March 2026 00:39:37 +0000 (0:00:03.708) 0:00:10.271 ********* 2026-03-10 00:39:48.545443 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:39:48.545453 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:39:48.545464 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:39:48.545475 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:39:48.545501 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:39:48.545561 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:39:48.545574 | orchestrator | 2026-03-10 00:39:48.545585 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-03-10 00:39:48.545596 | orchestrator | 2026-03-10 00:39:48.545607 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-03-10 00:39:48.545625 | orchestrator | Tuesday 10 March 2026 00:39:38 +0000 (0:00:01.301) 0:00:11.573 ********* 2026-03-10 00:39:48.545637 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:39:48.545648 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:39:48.545658 | orchestrator | changed: [testbed-node-2] 2026-03-10 
00:39:48.545669 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:39:48.545680 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:39:48.545690 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:39:48.545701 | orchestrator | changed: [testbed-manager] 2026-03-10 00:39:48.545711 | orchestrator | 2026-03-10 00:39:48.545722 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-03-10 00:39:48.545733 | orchestrator | Tuesday 10 March 2026 00:39:40 +0000 (0:00:01.601) 0:00:13.174 ********* 2026-03-10 00:39:48.545744 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:39:48.545755 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:39:48.545766 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:39:48.545777 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:39:48.545787 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:39:48.545798 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:39:48.545826 | orchestrator | changed: [testbed-manager] 2026-03-10 00:39:48.545838 | orchestrator | 2026-03-10 00:39:48.545849 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-03-10 00:39:48.545860 | orchestrator | Tuesday 10 March 2026 00:39:41 +0000 (0:00:01.398) 0:00:14.573 ********* 2026-03-10 00:39:48.545871 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:39:48.545882 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:39:48.545893 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:39:48.545903 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:39:48.545914 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:39:48.545925 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:39:48.545936 | orchestrator | ok: [testbed-manager] 2026-03-10 00:39:48.545947 | orchestrator | 2026-03-10 00:39:48.545958 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-03-10 00:39:48.545969 | orchestrator 
| Tuesday 10 March 2026 00:39:43 +0000 (0:00:01.403) 0:00:15.977 ********* 2026-03-10 00:39:48.545980 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:39:48.545991 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:39:48.546002 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:39:48.546013 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:39:48.546097 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:39:48.546109 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:39:48.546119 | orchestrator | changed: [testbed-manager] 2026-03-10 00:39:48.546130 | orchestrator | 2026-03-10 00:39:48.546141 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-03-10 00:39:48.546152 | orchestrator | Tuesday 10 March 2026 00:39:44 +0000 (0:00:01.922) 0:00:17.899 ********* 2026-03-10 00:39:48.546163 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:39:48.546174 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:39:48.546185 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:39:48.546195 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:39:48.546206 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:39:48.546217 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:39:48.546228 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:39:48.546239 | orchestrator | 2026-03-10 00:39:48.546250 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-03-10 00:39:48.546260 | orchestrator | 2026-03-10 00:39:48.546271 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-03-10 00:39:48.546282 | orchestrator | Tuesday 10 March 2026 00:39:45 +0000 (0:00:00.677) 0:00:18.577 ********* 2026-03-10 00:39:48.546293 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:39:48.546304 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:39:48.546315 | orchestrator | ok: [testbed-node-2] 
2026-03-10 00:39:48.546325 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:39:48.546336 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:39:48.546355 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:39:48.546366 | orchestrator | ok: [testbed-manager] 2026-03-10 00:39:48.546376 | orchestrator | 2026-03-10 00:39:48.546387 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:39:48.546400 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:39:48.546412 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:39:48.546424 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:39:48.546440 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:39:48.546452 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:39:48.546463 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:39:48.546473 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:39:48.546484 | orchestrator | 2026-03-10 00:39:48.546495 | orchestrator | 2026-03-10 00:39:48.546506 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:39:48.546538 | orchestrator | Tuesday 10 March 2026 00:39:48 +0000 (0:00:02.870) 0:00:21.447 ********* 2026-03-10 00:39:48.546550 | orchestrator | =============================================================================== 2026-03-10 00:39:48.546561 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.71s 2026-03-10 00:39:48.546572 | orchestrator | Install python3-docker 
-------------------------------------------------- 2.87s 2026-03-10 00:39:48.546582 | orchestrator | Apply netplan configuration --------------------------------------------- 2.36s 2026-03-10 00:39:48.546593 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.92s 2026-03-10 00:39:48.546604 | orchestrator | Apply netplan configuration --------------------------------------------- 1.75s 2026-03-10 00:39:48.546615 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.60s 2026-03-10 00:39:48.546625 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.47s 2026-03-10 00:39:48.546636 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.40s 2026-03-10 00:39:48.546647 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.40s 2026-03-10 00:39:48.546658 | orchestrator | Run update-ca-trust ----------------------------------------------------- 1.30s 2026-03-10 00:39:48.546669 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.84s 2026-03-10 00:39:48.546688 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.68s 2026-03-10 00:39:49.323091 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-03-10 00:40:01.562842 | orchestrator | 2026-03-10 00:40:01 | INFO  | Prepare task for execution of reboot. 2026-03-10 00:40:01.658268 | orchestrator | 2026-03-10 00:40:01 | INFO  | Task 2aff08c6-d939-4b9b-bfd9-c5e1fbd78a3a (reboot) was prepared for execution. 2026-03-10 00:40:01.658388 | orchestrator | 2026-03-10 00:40:01 | INFO  | It takes a moment until task 2aff08c6-d939-4b9b-bfd9-c5e1fbd78a3a (reboot) has been started and output is visible here. 
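The workarounds play above copies a workarounds.sh script plus a matching systemd unit to every node, reloads the daemon, and enables the service. A minimal sketch of such a oneshot unit follows; the description, paths, and ordering are assumptions for illustration, not taken from the testbed configuration:

```ini
# /etc/systemd/system/workarounds.service -- hedged sketch; the unit actually
# shipped by the testbed repository is not shown in this log.
[Unit]
Description=Apply testbed workarounds at boot
After=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/workarounds.sh   ; assumed script location
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```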
2026-03-10 00:40:11.621662 | orchestrator | 2026-03-10 00:40:11.621750 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-10 00:40:11.621765 | orchestrator | 2026-03-10 00:40:11.621776 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-10 00:40:11.621805 | orchestrator | Tuesday 10 March 2026 00:40:05 +0000 (0:00:00.187) 0:00:00.187 ********* 2026-03-10 00:40:11.621815 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:40:11.621826 | orchestrator | 2026-03-10 00:40:11.621835 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-10 00:40:11.621845 | orchestrator | Tuesday 10 March 2026 00:40:06 +0000 (0:00:00.094) 0:00:00.281 ********* 2026-03-10 00:40:11.621855 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:40:11.621864 | orchestrator | 2026-03-10 00:40:11.621874 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-10 00:40:11.621884 | orchestrator | Tuesday 10 March 2026 00:40:06 +0000 (0:00:00.930) 0:00:01.212 ********* 2026-03-10 00:40:11.621893 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:40:11.621903 | orchestrator | 2026-03-10 00:40:11.621913 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-10 00:40:11.621922 | orchestrator | 2026-03-10 00:40:11.621932 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-10 00:40:11.621942 | orchestrator | Tuesday 10 March 2026 00:40:07 +0000 (0:00:00.112) 0:00:01.324 ********* 2026-03-10 00:40:11.621952 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:40:11.621963 | orchestrator | 2026-03-10 00:40:11.621974 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-10 00:40:11.621985 | orchestrator | Tuesday 10 March 2026 
00:40:07 +0000 (0:00:00.092) 0:00:01.417 ********* 2026-03-10 00:40:11.621996 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:40:11.622006 | orchestrator | 2026-03-10 00:40:11.622056 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-10 00:40:11.622068 | orchestrator | Tuesday 10 March 2026 00:40:07 +0000 (0:00:00.641) 0:00:02.058 ********* 2026-03-10 00:40:11.622080 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:40:11.622090 | orchestrator | 2026-03-10 00:40:11.622101 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-10 00:40:11.622112 | orchestrator | 2026-03-10 00:40:11.622123 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-10 00:40:11.622134 | orchestrator | Tuesday 10 March 2026 00:40:07 +0000 (0:00:00.096) 0:00:02.154 ********* 2026-03-10 00:40:11.622144 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:40:11.622155 | orchestrator | 2026-03-10 00:40:11.622166 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-10 00:40:11.622177 | orchestrator | Tuesday 10 March 2026 00:40:08 +0000 (0:00:00.205) 0:00:02.359 ********* 2026-03-10 00:40:11.622201 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:40:11.622212 | orchestrator | 2026-03-10 00:40:11.622223 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-10 00:40:11.622233 | orchestrator | Tuesday 10 March 2026 00:40:08 +0000 (0:00:00.633) 0:00:02.992 ********* 2026-03-10 00:40:11.622244 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:40:11.622255 | orchestrator | 2026-03-10 00:40:11.622266 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-10 00:40:11.622277 | orchestrator | 2026-03-10 00:40:11.622288 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2026-03-10 00:40:11.622299 | orchestrator | Tuesday 10 March 2026 00:40:08 +0000 (0:00:00.110) 0:00:03.103 ********* 2026-03-10 00:40:11.622309 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:40:11.622320 | orchestrator | 2026-03-10 00:40:11.622331 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-10 00:40:11.622341 | orchestrator | Tuesday 10 March 2026 00:40:08 +0000 (0:00:00.091) 0:00:03.195 ********* 2026-03-10 00:40:11.622352 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:40:11.622363 | orchestrator | 2026-03-10 00:40:11.622374 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-10 00:40:11.622385 | orchestrator | Tuesday 10 March 2026 00:40:09 +0000 (0:00:00.658) 0:00:03.854 ********* 2026-03-10 00:40:11.622395 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:40:11.622413 | orchestrator | 2026-03-10 00:40:11.622424 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-10 00:40:11.622435 | orchestrator | 2026-03-10 00:40:11.622446 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-10 00:40:11.622457 | orchestrator | Tuesday 10 March 2026 00:40:09 +0000 (0:00:00.101) 0:00:03.956 ********* 2026-03-10 00:40:11.622468 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:40:11.622479 | orchestrator | 2026-03-10 00:40:11.622490 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-10 00:40:11.622535 | orchestrator | Tuesday 10 March 2026 00:40:09 +0000 (0:00:00.106) 0:00:04.062 ********* 2026-03-10 00:40:11.622549 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:40:11.622560 | orchestrator | 2026-03-10 00:40:11.622571 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-03-10 00:40:11.622581 | orchestrator | Tuesday 10 March 2026 00:40:10 +0000 (0:00:00.638) 0:00:04.700 ********* 2026-03-10 00:40:11.622592 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:40:11.622603 | orchestrator | 2026-03-10 00:40:11.622614 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-10 00:40:11.622624 | orchestrator | 2026-03-10 00:40:11.622635 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-10 00:40:11.622646 | orchestrator | Tuesday 10 March 2026 00:40:10 +0000 (0:00:00.102) 0:00:04.803 ********* 2026-03-10 00:40:11.622657 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:40:11.622667 | orchestrator | 2026-03-10 00:40:11.622678 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-10 00:40:11.622689 | orchestrator | Tuesday 10 March 2026 00:40:10 +0000 (0:00:00.102) 0:00:04.905 ********* 2026-03-10 00:40:11.622700 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:40:11.622710 | orchestrator | 2026-03-10 00:40:11.622721 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-10 00:40:11.622733 | orchestrator | Tuesday 10 March 2026 00:40:11 +0000 (0:00:00.662) 0:00:05.568 ********* 2026-03-10 00:40:11.622760 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:40:11.622772 | orchestrator | 2026-03-10 00:40:11.622783 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:40:11.622795 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:40:11.622806 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:40:11.622817 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-10 00:40:11.622828 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:40:11.622839 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:40:11.622850 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:40:11.622860 | orchestrator | 2026-03-10 00:40:11.622871 | orchestrator | 2026-03-10 00:40:11.622882 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:40:11.622893 | orchestrator | Tuesday 10 March 2026 00:40:11 +0000 (0:00:00.034) 0:00:05.603 ********* 2026-03-10 00:40:11.622904 | orchestrator | =============================================================================== 2026-03-10 00:40:11.622915 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.16s 2026-03-10 00:40:11.622925 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.69s 2026-03-10 00:40:11.622943 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.56s 2026-03-10 00:40:11.876569 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-10 00:40:23.933822 | orchestrator | 2026-03-10 00:40:23 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-10 00:40:24.003636 | orchestrator | 2026-03-10 00:40:24 | INFO  | Task d6f80aa8-f030-47d3-924b-f52f9fe2d7fc (wait-for-connection) was prepared for execution. 2026-03-10 00:40:24.003744 | orchestrator | 2026-03-10 00:40:24 | INFO  | It takes a moment until task d6f80aa8-f030-47d3-924b-f52f9fe2d7fc (wait-for-connection) has been started and output is visible here. 
2026-03-10 00:40:40.455592 | orchestrator | 2026-03-10 00:40:40.455690 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-10 00:40:40.455699 | orchestrator | 2026-03-10 00:40:40.455704 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-10 00:40:40.455709 | orchestrator | Tuesday 10 March 2026 00:40:28 +0000 (0:00:00.214) 0:00:00.214 ********* 2026-03-10 00:40:40.455713 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:40:40.455718 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:40:40.455723 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:40:40.455727 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:40:40.455731 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:40:40.455735 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:40:40.455738 | orchestrator | 2026-03-10 00:40:40.455742 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:40:40.455747 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:40:40.455753 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:40:40.455757 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:40:40.455761 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:40:40.455765 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:40:40.455769 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:40:40.455772 | orchestrator | 2026-03-10 00:40:40.455776 | orchestrator | 2026-03-10 00:40:40.455780 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-10 00:40:40.455784 | orchestrator | Tuesday 10 March 2026 00:40:40 +0000 (0:00:11.575) 0:00:11.790 ********* 2026-03-10 00:40:40.455788 | orchestrator | =============================================================================== 2026-03-10 00:40:40.455791 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.58s 2026-03-10 00:40:40.889779 | orchestrator | + osism apply hddtemp 2026-03-10 00:40:53.170220 | orchestrator | 2026-03-10 00:40:53 | INFO  | Prepare task for execution of hddtemp. 2026-03-10 00:40:53.243618 | orchestrator | 2026-03-10 00:40:53 | INFO  | Task 8ff7495a-981c-48f9-be6f-4cac6c395900 (hddtemp) was prepared for execution. 2026-03-10 00:40:53.243741 | orchestrator | 2026-03-10 00:40:53 | INFO  | It takes a moment until task 8ff7495a-981c-48f9-be6f-4cac6c395900 (hddtemp) has been started and output is visible here. 2026-03-10 00:41:21.588591 | orchestrator | 2026-03-10 00:41:21.588731 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-10 00:41:21.588759 | orchestrator | 2026-03-10 00:41:21.588777 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-10 00:41:21.588794 | orchestrator | Tuesday 10 March 2026 00:40:57 +0000 (0:00:00.276) 0:00:00.276 ********* 2026-03-10 00:41:21.588848 | orchestrator | ok: [testbed-manager] 2026-03-10 00:41:21.588862 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:41:21.588873 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:41:21.588890 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:41:21.588909 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:41:21.588927 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:41:21.588945 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:41:21.588962 | orchestrator | 2026-03-10 00:41:21.588980 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-03-10 00:41:21.588998 | orchestrator | Tuesday 10 March 2026 00:40:58 +0000 (0:00:00.733) 0:00:01.010 ********* 2026-03-10 00:41:21.589016 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:41:21.589036 | orchestrator | 2026-03-10 00:41:21.589053 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-10 00:41:21.589072 | orchestrator | Tuesday 10 March 2026 00:40:59 +0000 (0:00:01.097) 0:00:02.107 ********* 2026-03-10 00:41:21.589091 | orchestrator | ok: [testbed-manager] 2026-03-10 00:41:21.589108 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:41:21.589125 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:41:21.589141 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:41:21.589161 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:41:21.589186 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:41:21.589206 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:41:21.589224 | orchestrator | 2026-03-10 00:41:21.589241 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-10 00:41:21.589257 | orchestrator | Tuesday 10 March 2026 00:41:01 +0000 (0:00:01.744) 0:00:03.852 ********* 2026-03-10 00:41:21.589274 | orchestrator | changed: [testbed-manager] 2026-03-10 00:41:21.589293 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:41:21.589309 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:41:21.589327 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:41:21.589348 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:41:21.589366 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:41:21.589382 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:41:21.589400 | 
orchestrator | 2026-03-10 00:41:21.589436 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-03-10 00:41:21.589451 | orchestrator | Tuesday 10 March 2026 00:41:02 +0000 (0:00:01.310) 0:00:05.162 ********* 2026-03-10 00:41:21.589493 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:41:21.589518 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:41:21.589535 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:41:21.589551 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:41:21.589568 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:41:21.589585 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:41:21.589602 | orchestrator | ok: [testbed-manager] 2026-03-10 00:41:21.589619 | orchestrator | 2026-03-10 00:41:21.589629 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-10 00:41:21.589639 | orchestrator | Tuesday 10 March 2026 00:41:03 +0000 (0:00:01.217) 0:00:06.379 ********* 2026-03-10 00:41:21.589649 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:41:21.589662 | orchestrator | changed: [testbed-manager] 2026-03-10 00:41:21.589678 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:41:21.589693 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:41:21.589709 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:41:21.589725 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:41:21.589742 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:41:21.589753 | orchestrator | 2026-03-10 00:41:21.589763 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-10 00:41:21.589775 | orchestrator | Tuesday 10 March 2026 00:41:04 +0000 (0:00:00.869) 0:00:07.249 ********* 2026-03-10 00:41:21.589792 | orchestrator | changed: [testbed-manager] 2026-03-10 00:41:21.589819 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:41:21.589837 | orchestrator | changed: [testbed-node-5] 
2026-03-10 00:41:21.589854 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:41:21.589865 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:41:21.589877 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:41:21.589893 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:41:21.589910 | orchestrator | 2026-03-10 00:41:21.589927 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-10 00:41:21.589945 | orchestrator | Tuesday 10 March 2026 00:41:17 +0000 (0:00:12.802) 0:00:20.052 ********* 2026-03-10 00:41:21.589964 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:41:21.589982 | orchestrator | 2026-03-10 00:41:21.589998 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-10 00:41:21.590092 | orchestrator | Tuesday 10 March 2026 00:41:18 +0000 (0:00:01.302) 0:00:21.354 ********* 2026-03-10 00:41:21.590115 | orchestrator | changed: [testbed-manager] 2026-03-10 00:41:21.590129 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:41:21.590139 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:41:21.590149 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:41:21.590166 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:41:21.590181 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:41:21.590197 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:41:21.590213 | orchestrator | 2026-03-10 00:41:21.590228 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:41:21.590244 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:41:21.590290 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:41:21.590309 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:41:21.590330 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:41:21.590347 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:41:21.590365 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:41:21.590381 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:41:21.590397 | orchestrator | 2026-03-10 00:41:21.590413 | orchestrator | 2026-03-10 00:41:21.590429 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:41:21.590445 | orchestrator | Tuesday 10 March 2026 00:41:21 +0000 (0:00:02.145) 0:00:23.500 ********* 2026-03-10 00:41:21.590461 | orchestrator | =============================================================================== 2026-03-10 00:41:21.590519 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 12.80s 2026-03-10 00:41:21.590536 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 2.15s 2026-03-10 00:41:21.590551 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 1.74s 2026-03-10 00:41:21.590567 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.31s 2026-03-10 00:41:21.590583 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.30s 2026-03-10 00:41:21.590598 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.22s 2026-03-10 00:41:21.590629 | orchestrator | osism.services.hddtemp : Include 
distribution specific install tasks ---- 1.10s 2026-03-10 00:41:21.590655 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.87s 2026-03-10 00:41:21.590672 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.73s 2026-03-10 00:41:22.015901 | orchestrator | ++ semver latest 7.1.1 2026-03-10 00:41:22.065872 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-10 00:41:22.065953 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-10 00:41:22.065964 | orchestrator | + sudo systemctl restart manager.service 2026-03-10 00:41:36.200084 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-10 00:41:36.200177 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-10 00:41:36.200194 | orchestrator | + local max_attempts=60 2026-03-10 00:41:36.200206 | orchestrator | + local name=ceph-ansible 2026-03-10 00:41:36.200217 | orchestrator | + local attempt_num=1 2026-03-10 00:41:36.200228 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:41:36.240528 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:41:36.240611 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:41:36.240627 | orchestrator | + sleep 5 2026-03-10 00:41:41.245313 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:41:41.286218 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:41:41.286300 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:41:41.286318 | orchestrator | + sleep 5 2026-03-10 00:41:46.289660 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:41:46.324529 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:41:46.324613 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:41:46.324628 | orchestrator | + sleep 5 2026-03-10 00:41:51.329028 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:41:51.374124 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:41:51.374228 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:41:51.374244 | orchestrator | + sleep 5 2026-03-10 00:41:56.379883 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:41:56.412867 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:41:56.412976 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:41:56.413002 | orchestrator | + sleep 5 2026-03-10 00:42:01.418292 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:42:01.454747 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:42:01.454850 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:42:01.454866 | orchestrator | + sleep 5 2026-03-10 00:42:06.460140 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:42:06.504593 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:42:06.504712 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:42:06.504730 | orchestrator | + sleep 5 2026-03-10 00:42:11.509178 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:42:11.542621 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-10 00:42:11.542709 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:42:11.542721 | orchestrator | + sleep 5 2026-03-10 00:42:16.548733 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:42:16.584583 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-10 00:42:16.584671 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:42:16.584686 | orchestrator | + sleep 5 2026-03-10 00:42:21.588270 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:42:21.627895 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-10 00:42:21.627995 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:42:21.628010 | orchestrator | + sleep 5 2026-03-10 00:42:26.632506 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:42:26.674183 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-10 00:42:26.674265 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:42:26.674276 | orchestrator | + sleep 5 2026-03-10 00:42:31.680034 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:42:31.717992 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-10 00:42:31.718184 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:42:31.718251 | orchestrator | + sleep 5 2026-03-10 00:42:36.723569 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:42:36.750700 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-10 00:42:36.750790 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-10 00:42:36.750805 | orchestrator | + sleep 5 2026-03-10 00:42:41.755414 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-10 00:42:41.797976 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:42:41.798281 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-10 00:42:41.798304 | orchestrator | + local max_attempts=60 2026-03-10 00:42:41.798316 | orchestrator | + local name=kolla-ansible 2026-03-10 00:42:41.798328 | orchestrator | + local attempt_num=1 2026-03-10 00:42:41.798353 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-10 00:42:41.843074 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:42:41.843195 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-03-10 00:42:41.843216 | orchestrator | + local max_attempts=60 2026-03-10 00:42:41.843233 | orchestrator | + local name=osism-ansible 2026-03-10 00:42:41.843250 | orchestrator | + local attempt_num=1 2026-03-10 00:42:41.843267 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-10 00:42:41.875900 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-10 00:42:41.875995 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-10 00:42:41.876010 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-10 00:42:42.074275 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-10 00:42:42.235871 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-10 00:42:42.407868 | orchestrator | ARA in osism-ansible already disabled. 2026-03-10 00:42:42.573782 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-10 00:42:42.574757 | orchestrator | + osism apply gather-facts 2026-03-10 00:42:55.010575 | orchestrator | 2026-03-10 00:42:55 | INFO  | Prepare task for execution of gather-facts. 2026-03-10 00:42:55.077695 | orchestrator | 2026-03-10 00:42:55 | INFO  | Task b050d2ea-7a69-40b7-a46a-2e5ca2cc7ac2 (gather-facts) was prepared for execution. 2026-03-10 00:42:55.077798 | orchestrator | 2026-03-10 00:42:55 | INFO  | It takes a moment until task b050d2ea-7a69-40b7-a46a-2e5ca2cc7ac2 (gather-facts) has been started and output is visible here. 
2026-03-10 00:43:08.392472 | orchestrator | 2026-03-10 00:43:08.392589 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-10 00:43:08.392606 | orchestrator | 2026-03-10 00:43:08.392618 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-10 00:43:08.392630 | orchestrator | Tuesday 10 March 2026 00:42:59 +0000 (0:00:00.229) 0:00:00.229 ********* 2026-03-10 00:43:08.392641 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:43:08.392653 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:43:08.392664 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:43:08.392675 | orchestrator | ok: [testbed-manager] 2026-03-10 00:43:08.392686 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:43:08.392697 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:43:08.392708 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:43:08.392718 | orchestrator | 2026-03-10 00:43:08.392745 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-10 00:43:08.392756 | orchestrator | 2026-03-10 00:43:08.392767 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-10 00:43:08.392789 | orchestrator | Tuesday 10 March 2026 00:43:07 +0000 (0:00:07.895) 0:00:08.125 ********* 2026-03-10 00:43:08.392801 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:43:08.392813 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:43:08.392824 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:43:08.392835 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:43:08.392846 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:43:08.392857 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:43:08.392867 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:43:08.392882 | orchestrator | 2026-03-10 00:43:08.392901 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-10 00:43:08.392955 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:43:08.392978 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:43:08.392999 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:43:08.393040 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:43:08.393060 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:43:08.393071 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:43:08.393082 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 00:43:08.393093 | orchestrator | 2026-03-10 00:43:08.393104 | orchestrator | 2026-03-10 00:43:08.393115 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:43:08.393126 | orchestrator | Tuesday 10 March 2026 00:43:07 +0000 (0:00:00.601) 0:00:08.726 ********* 2026-03-10 00:43:08.393137 | orchestrator | =============================================================================== 2026-03-10 00:43:08.393147 | orchestrator | Gathers facts about hosts ----------------------------------------------- 7.90s 2026-03-10 00:43:08.393158 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.60s 2026-03-10 00:43:08.768637 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-10 00:43:08.781760 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-10 
00:43:08.793329 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-10 00:43:08.805586 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-10 00:43:08.827623 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-10 00:43:08.842933 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-10 00:43:08.859060 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-10 00:43:08.874801 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-10 00:43:08.887884 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-10 00:43:08.901945 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-10 00:43:08.914951 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-10 00:43:08.934671 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-10 00:43:08.954670 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-10 00:43:08.969248 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-10 00:43:08.982485 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-10 00:43:09.004753 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-10 00:43:09.018232 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-10 00:43:09.035664 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-10 00:43:09.046510 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-10 00:43:09.060045 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-10 00:43:09.073862 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-10 00:43:09.089960 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-10 00:43:09.104164 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-10 00:43:09.118202 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-10 00:43:09.304697 | orchestrator | ok: Runtime: 0:24:48.759281 2026-03-10 00:43:09.412802 | 2026-03-10 00:43:09.412950 | TASK [Deploy services] 2026-03-10 00:43:09.948445 | orchestrator | skipping: Conditional result was False 2026-03-10 00:43:09.965217 | 2026-03-10 00:43:09.965396 | TASK [Deploy in a nutshell] 2026-03-10 00:43:10.722587 | orchestrator | + set -e 2026-03-10 00:43:10.722749 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-10 00:43:10.722768 | orchestrator | ++ export INTERACTIVE=false 2026-03-10 00:43:10.722782 | orchestrator | ++ INTERACTIVE=false 2026-03-10 00:43:10.722793 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-10 00:43:10.722802 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-10 00:43:10.722813 | 
orchestrator | + source /opt/manager-vars.sh
2026-03-10 00:43:10.722846 | orchestrator | ++ export NUMBER_OF_NODES=6
2026-03-10 00:43:10.722870 | orchestrator | ++ NUMBER_OF_NODES=6
2026-03-10 00:43:10.722880 | orchestrator | ++ export CEPH_VERSION=reef
2026-03-10 00:43:10.722892 | orchestrator | ++ CEPH_VERSION=reef
2026-03-10 00:43:10.722901 | orchestrator | ++ export CONFIGURATION_VERSION=main
2026-03-10 00:43:10.722915 | orchestrator | ++ CONFIGURATION_VERSION=main
2026-03-10 00:43:10.722924 | orchestrator | ++ export MANAGER_VERSION=latest
2026-03-10 00:43:10.722935 | orchestrator | ++ MANAGER_VERSION=latest
2026-03-10 00:43:10.722940 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2026-03-10 00:43:10.722948 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2026-03-10 00:43:10.722954 | orchestrator | ++ export ARA=false
2026-03-10 00:43:10.722963 | orchestrator | ++ ARA=false
2026-03-10 00:43:10.722971 | orchestrator | ++ export DEPLOY_MODE=manager
2026-03-10 00:43:10.722979 | orchestrator | ++ DEPLOY_MODE=manager
2026-03-10 00:43:10.723000 | orchestrator | ++ export TEMPEST=true
2026-03-10 00:43:10.723009 | orchestrator | ++ TEMPEST=true
2026-03-10 00:43:10.723016 | orchestrator | ++ export IS_ZUUL=true
2026-03-10 00:43:10.723025 | orchestrator | ++ IS_ZUUL=true
2026-03-10 00:43:10.723033 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.226
2026-03-10 00:43:10.723042 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.226
2026-03-10 00:43:10.723050 | orchestrator | ++ export EXTERNAL_API=false
2026-03-10 00:43:10.723058 | orchestrator | ++ EXTERNAL_API=false
2026-03-10 00:43:10.723065 | orchestrator | ++ export IMAGE_USER=ubuntu
2026-03-10 00:43:10.723073 | orchestrator | ++ IMAGE_USER=ubuntu
2026-03-10 00:43:10.723080 | orchestrator |
2026-03-10 00:43:10.723089 | orchestrator | # PULL IMAGES
2026-03-10 00:43:10.723097 | orchestrator |
2026-03-10 00:43:10.723105 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2026-03-10 00:43:10.723113 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2026-03-10 00:43:10.723121 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2026-03-10 00:43:10.723136 | orchestrator | ++ CEPH_STACK=ceph-ansible
2026-03-10 00:43:10.723145 | orchestrator | + echo
2026-03-10 00:43:10.723153 | orchestrator | + echo '# PULL IMAGES'
2026-03-10 00:43:10.723161 | orchestrator | + echo
2026-03-10 00:43:10.724341 | orchestrator | ++ semver latest 7.0.0
2026-03-10 00:43:10.781282 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-10 00:43:10.781395 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-10 00:43:10.781444 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2026-03-10 00:43:12.828102 | orchestrator | 2026-03-10 00:43:12 | INFO  | Trying to run play pull-images in environment custom
2026-03-10 00:43:22.954665 | orchestrator | 2026-03-10 00:43:22 | INFO  | Prepare task for execution of pull-images.
2026-03-10 00:43:23.042867 | orchestrator | 2026-03-10 00:43:23 | INFO  | Task 6f6371f7-98d4-4e55-aa34-e220862b60ba (pull-images) was prepared for execution.
2026-03-10 00:43:23.042951 | orchestrator | 2026-03-10 00:43:23 | INFO  | Task 6f6371f7-98d4-4e55-aa34-e220862b60ba is running in background. No more output. Check ARA for logs.
2026-03-10 00:43:25.682213 | orchestrator | 2026-03-10 00:43:25 | INFO  | Trying to run play wipe-partitions in environment custom
2026-03-10 00:43:35.802330 | orchestrator | 2026-03-10 00:43:35 | INFO  | Prepare task for execution of wipe-partitions.
2026-03-10 00:43:35.879827 | orchestrator | 2026-03-10 00:43:35 | INFO  | Task 9377fe77-19a4-4df0-948e-fe9ed731dc00 (wipe-partitions) was prepared for execution.
2026-03-10 00:43:35.879910 | orchestrator | 2026-03-10 00:43:35 | INFO  | It takes a moment until task 9377fe77-19a4-4df0-948e-fe9ed731dc00 (wipe-partitions) has been started and output is visible here.
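The shell trace above shows a version gate before pull-images runs: `semver latest 7.0.0` returns -1, the numeric branch `[[ -1 -ge 0 ]]` fails, and the literal `latest` check lets the run proceed anyway. A minimal re-implementation of that gate (function name and the `sort -V` comparison are assumptions for illustration, not the testbed's actual `semver` helper):

```shell
# Sketch of the version gate seen in the trace: treat "latest" as newest,
# otherwise require the version to be >= 7.0.0 (compared with sort -V).
manager_version_ok() {
    local version="$1" floor="7.0.0"
    if [ "$version" = "latest" ]; then
        return 0
    fi
    # the higher of the two versions sorts last; equal versions also pass
    [ "$(printf '%s\n%s\n' "$version" "$floor" | sort -V | tail -n1)" = "$version" ]
}
```

With `MANAGER_VERSION=latest`, the gate passes and `osism apply --no-wait -r 2 -e custom pull-images` is executed, matching the `[[ latest == \l\a\t\e\s\t ]]` branch in the trace.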
2026-03-10 00:43:48.083079 | orchestrator |
2026-03-10 00:43:48.083234 | orchestrator | PLAY [Wipe partitions] *********************************************************
2026-03-10 00:43:48.083261 | orchestrator |
2026-03-10 00:43:48.083274 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2026-03-10 00:43:48.083291 | orchestrator | Tuesday 10 March 2026 00:43:40 +0000 (0:00:00.128) 0:00:00.128 *********
2026-03-10 00:43:48.083333 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:43:48.083346 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:43:48.083356 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:43:48.083367 | orchestrator |
2026-03-10 00:43:48.083378 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2026-03-10 00:43:48.083389 | orchestrator | Tuesday 10 March 2026 00:43:40 +0000 (0:00:00.560) 0:00:00.688 *********
2026-03-10 00:43:48.083456 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:43:48.083470 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:43:48.083481 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:43:48.083492 | orchestrator |
2026-03-10 00:43:48.083503 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2026-03-10 00:43:48.083514 | orchestrator | Tuesday 10 March 2026 00:43:40 +0000 (0:00:00.314) 0:00:01.003 *********
2026-03-10 00:43:48.083525 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:43:48.083536 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:43:48.083547 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:43:48.083557 | orchestrator |
2026-03-10 00:43:48.083568 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2026-03-10 00:43:48.083579 | orchestrator | Tuesday 10 March 2026 00:43:41 +0000 (0:00:00.547) 0:00:01.550 *********
2026-03-10 00:43:48.083590 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:43:48.083604 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:43:48.083617 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:43:48.083629 | orchestrator |
2026-03-10 00:43:48.083642 | orchestrator | TASK [Check device availability] ***********************************************
2026-03-10 00:43:48.083655 | orchestrator | Tuesday 10 March 2026 00:43:41 +0000 (0:00:00.260) 0:00:01.811 *********
2026-03-10 00:43:48.083668 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-10 00:43:48.083685 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-10 00:43:48.083698 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-10 00:43:48.083711 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-10 00:43:48.083725 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-10 00:43:48.083737 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-10 00:43:48.083750 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-10 00:43:48.083763 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-10 00:43:48.083776 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-10 00:43:48.083789 | orchestrator |
2026-03-10 00:43:48.083802 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2026-03-10 00:43:48.083816 | orchestrator | Tuesday 10 March 2026 00:43:42 +0000 (0:00:01.153) 0:00:02.965 *********
2026-03-10 00:43:48.083829 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2026-03-10 00:43:48.083842 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2026-03-10 00:43:48.083855 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2026-03-10 00:43:48.083868 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2026-03-10 00:43:48.083880 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2026-03-10 00:43:48.083893 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2026-03-10 00:43:48.083905 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2026-03-10 00:43:48.083918 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2026-03-10 00:43:48.083930 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2026-03-10 00:43:48.083943 | orchestrator |
2026-03-10 00:43:48.083962 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2026-03-10 00:43:48.083974 | orchestrator | Tuesday 10 March 2026 00:43:44 +0000 (0:00:01.546) 0:00:04.511 *********
2026-03-10 00:43:48.083984 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2026-03-10 00:43:48.083996 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2026-03-10 00:43:48.084006 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2026-03-10 00:43:48.084017 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2026-03-10 00:43:48.084037 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2026-03-10 00:43:48.084049 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2026-03-10 00:43:48.084059 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2026-03-10 00:43:48.084070 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2026-03-10 00:43:48.084081 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2026-03-10 00:43:48.084092 | orchestrator |
2026-03-10 00:43:48.084103 | orchestrator | TASK [Reload udev rules] *******************************************************
2026-03-10 00:43:48.084114 | orchestrator | Tuesday 10 March 2026 00:43:46 +0000 (0:00:02.041) 0:00:06.552 *********
2026-03-10 00:43:48.084124 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:43:48.084135 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:43:48.084146 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:43:48.084157 | orchestrator |
2026-03-10 00:43:48.084167 | orchestrator | TASK [Request device events from the kernel] ***********************************
2026-03-10 00:43:48.084179 | orchestrator | Tuesday 10 March 2026 00:43:47 +0000 (0:00:00.605) 0:00:07.158 *********
2026-03-10 00:43:48.084189 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:43:48.084200 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:43:48.084211 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:43:48.084222 | orchestrator |
2026-03-10 00:43:48.084233 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:43:48.084246 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:43:48.084258 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:43:48.084289 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:43:48.084301 | orchestrator |
2026-03-10 00:43:48.084312 | orchestrator |
2026-03-10 00:43:48.084323 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:43:48.084334 | orchestrator | Tuesday 10 March 2026 00:43:47 +0000 (0:00:00.617) 0:00:07.776 *********
2026-03-10 00:43:48.084345 | orchestrator | ===============================================================================
2026-03-10 00:43:48.084355 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.04s
2026-03-10 00:43:48.084366 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.55s
2026-03-10 00:43:48.084377 | orchestrator | Check device availability ----------------------------------------------- 1.15s
2026-03-10 00:43:48.084388 | orchestrator | Request device events from the kernel ----------------------------------- 0.62s
2026-03-10 00:43:48.084440 | orchestrator | Reload udev rules ------------------------------------------------------- 0.61s
2026-03-10 00:43:48.084462 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.56s
2026-03-10 00:43:48.084482 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.55s
2026-03-10 00:43:48.084500 | orchestrator | Remove all rook related logical devices --------------------------------- 0.31s
2026-03-10 00:43:48.084514 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.26s
2026-03-10 00:44:00.357012 | orchestrator | 2026-03-10 00:44:00 | INFO  | Prepare task for execution of facts.
2026-03-10 00:44:00.425990 | orchestrator | 2026-03-10 00:44:00 | INFO  | Task 32ec66a0-a0b4-4d08-b59e-214793b60446 (facts) was prepared for execution.
2026-03-10 00:44:00.426091 | orchestrator | 2026-03-10 00:44:00 | INFO  | It takes a moment until task 32ec66a0-a0b4-4d08-b59e-214793b60446 (facts) has been started and output is visible here.
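The wipe-partitions play above reduces to a short per-device sequence on each OSD node: drop on-disk signatures, zero the first 32 MiB, then resync udev. A hedged sketch of equivalent manual commands follows; the play's exact module arguments are not visible in the log, and the invocation is guarded because it destroys data on the listed devices:

```shell
#!/usr/bin/env bash
# Per-device wipe mirroring the tasks in the play above (sketch only).
wipe_device() {
    local dev="$1"
    wipefs --all "$dev"                       # drop fs/RAID/LVM signatures
    dd if=/dev/zero of="$dev" bs=1M count=32  # zero first 32M: partition table, labels
}

# Guarded example invocation (set RUN_WIPE=yes only on disposable devices):
if [ "${RUN_WIPE:-no}" = "yes" ]; then
    for dev in /dev/sdb /dev/sdc /dev/sdd; do
        wipe_device "$dev"
    done
    udevadm control --reload-rules            # "Reload udev rules"
    udevadm trigger                           # "Request device events from the kernel"
fi
```

The udev resync at the end matters because LVM and Ceph tooling later consult `/dev/disk/by-id` links, which only get refreshed once the kernel re-emits device events for the now-blank disks.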
2026-03-10 00:44:12.959549 | orchestrator |
2026-03-10 00:44:12.959636 | orchestrator | PLAY [Apply role facts] ********************************************************
2026-03-10 00:44:12.959646 | orchestrator |
2026-03-10 00:44:12.959673 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2026-03-10 00:44:12.959680 | orchestrator | Tuesday 10 March 2026 00:44:04 +0000 (0:00:00.290) 0:00:00.290 *********
2026-03-10 00:44:12.959685 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:44:12.959691 | orchestrator | ok: [testbed-manager]
2026-03-10 00:44:12.959697 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:44:12.959702 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:44:12.959708 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:44:12.959713 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:44:12.959718 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:44:12.959723 | orchestrator |
2026-03-10 00:44:12.959729 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2026-03-10 00:44:12.959734 | orchestrator | Tuesday 10 March 2026 00:44:06 +0000 (0:00:01.123) 0:00:01.413 *********
2026-03-10 00:44:12.959740 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:44:12.959746 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:44:12.959752 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:44:12.959757 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:44:12.959762 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.959768 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:12.959773 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:44:12.959778 | orchestrator |
2026-03-10 00:44:12.959784 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-10 00:44:12.959801 | orchestrator |
2026-03-10 00:44:12.959807 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-10 00:44:12.959814 | orchestrator | Tuesday 10 March 2026 00:44:07 +0000 (0:00:01.343) 0:00:02.757 *********
2026-03-10 00:44:12.959819 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:44:12.959823 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:44:12.959828 | orchestrator | ok: [testbed-manager]
2026-03-10 00:44:12.959833 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:44:12.959838 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:44:12.959842 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:44:12.959847 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:44:12.959852 | orchestrator |
2026-03-10 00:44:12.959857 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2026-03-10 00:44:12.959861 | orchestrator |
2026-03-10 00:44:12.959866 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2026-03-10 00:44:12.959871 | orchestrator | Tuesday 10 March 2026 00:44:12 +0000 (0:00:04.707) 0:00:07.464 *********
2026-03-10 00:44:12.959876 | orchestrator | skipping: [testbed-manager]
2026-03-10 00:44:12.959880 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:44:12.959885 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:44:12.959890 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:44:12.959894 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:12.959899 | orchestrator | skipping: [testbed-node-4]
2026-03-10 00:44:12.959904 | orchestrator | skipping: [testbed-node-5]
2026-03-10 00:44:12.959908 | orchestrator |
2026-03-10 00:44:12.959913 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:44:12.959918 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:44:12.959925 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:44:12.959930 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:44:12.959935 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:44:12.959940 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:44:12.959948 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:44:12.959953 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-10 00:44:12.959958 | orchestrator |
2026-03-10 00:44:12.959963 | orchestrator |
2026-03-10 00:44:12.959967 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:44:12.959972 | orchestrator | Tuesday 10 March 2026 00:44:12 +0000 (0:00:00.474) 0:00:07.939 *********
2026-03-10 00:44:12.959977 | orchestrator | ===============================================================================
2026-03-10 00:44:12.959982 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.71s
2026-03-10 00:44:12.959987 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.34s
2026-03-10 00:44:12.959992 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.12s
2026-03-10 00:44:12.959996 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.47s
2026-03-10 00:44:15.645093 | orchestrator | 2026-03-10 00:44:15 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes.
2026-03-10 00:44:15.715465 | orchestrator | 2026-03-10 00:44:15 | INFO  | Task f1142145-cdd3-4394-9d14-d3b9212076ce (ceph-configure-lvm-volumes) was prepared for execution.
2026-03-10 00:44:15.715561 | orchestrator | 2026-03-10 00:44:15 | INFO  | It takes a moment until task f1142145-cdd3-4394-9d14-d3b9212076ce (ceph-configure-lvm-volumes) has been started and output is visible here.
2026-03-10 00:44:28.258164 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-10 00:44:28.258277 | orchestrator | 2.16.14
2026-03-10 00:44:28.258293 | orchestrator |
2026-03-10 00:44:28.258305 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-10 00:44:28.258317 | orchestrator |
2026-03-10 00:44:28.258328 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-10 00:44:28.258339 | orchestrator | Tuesday 10 March 2026 00:44:20 +0000 (0:00:00.354) 0:00:00.354 *********
2026-03-10 00:44:28.258350 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-10 00:44:28.258361 | orchestrator |
2026-03-10 00:44:28.258372 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-10 00:44:28.258465 | orchestrator | Tuesday 10 March 2026 00:44:20 +0000 (0:00:00.275) 0:00:00.630 *********
2026-03-10 00:44:28.258484 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:44:28.258503 | orchestrator |
2026-03-10 00:44:28.258521 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:28.258537 | orchestrator | Tuesday 10 March 2026 00:44:20 +0000 (0:00:00.259) 0:00:00.890 *********
2026-03-10 00:44:28.258569 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-10 00:44:28.258589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-10 00:44:28.258608 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-10 00:44:28.258629 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-10 00:44:28.258649 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-10 00:44:28.258669 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-10 00:44:28.258683 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-10 00:44:28.258696 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-10 00:44:28.258708 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-10 00:44:28.258721 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-10 00:44:28.258771 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-10 00:44:28.258792 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-10 00:44:28.258810 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-10 00:44:28.258825 | orchestrator |
2026-03-10 00:44:28.258837 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:28.258850 | orchestrator | Tuesday 10 March 2026 00:44:21 +0000 (0:00:00.496) 0:00:01.386 *********
2026-03-10 00:44:28.258862 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:28.258874 | orchestrator |
2026-03-10 00:44:28.258886 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:28.258898 | orchestrator | Tuesday 10 March 2026 00:44:21 +0000 (0:00:00.210) 0:00:01.596 *********
2026-03-10 00:44:28.258910 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:28.258922 | orchestrator |
2026-03-10 00:44:28.258936 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:28.258953 | orchestrator | Tuesday 10 March 2026 00:44:21 +0000 (0:00:00.207) 0:00:01.804 *********
2026-03-10 00:44:28.258964 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:28.258975 | orchestrator |
2026-03-10 00:44:28.258985 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:28.258996 | orchestrator | Tuesday 10 March 2026 00:44:22 +0000 (0:00:00.247) 0:00:02.052 *********
2026-03-10 00:44:28.259007 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:28.259018 | orchestrator |
2026-03-10 00:44:28.259029 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:28.259039 | orchestrator | Tuesday 10 March 2026 00:44:22 +0000 (0:00:00.211) 0:00:02.263 *********
2026-03-10 00:44:28.259050 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:28.259060 | orchestrator |
2026-03-10 00:44:28.259071 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:28.259081 | orchestrator | Tuesday 10 March 2026 00:44:22 +0000 (0:00:00.201) 0:00:02.465 *********
2026-03-10 00:44:28.259092 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:28.259102 | orchestrator |
2026-03-10 00:44:28.259113 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:28.259123 | orchestrator | Tuesday 10 March 2026 00:44:22 +0000 (0:00:00.234) 0:00:02.699 *********
2026-03-10 00:44:28.259134 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:28.259145 | orchestrator |
2026-03-10 00:44:28.259155 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:28.259166 | orchestrator | Tuesday 10 March 2026 00:44:22 +0000 (0:00:00.196) 0:00:02.895 *********
2026-03-10 00:44:28.259176 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:28.259187 | orchestrator |
2026-03-10 00:44:28.259197 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:28.259208 | orchestrator | Tuesday 10 March 2026 00:44:23 +0000 (0:00:00.250) 0:00:03.146 *********
2026-03-10 00:44:28.259219 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba)
2026-03-10 00:44:28.259230 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba)
2026-03-10 00:44:28.259241 | orchestrator |
2026-03-10 00:44:28.259251 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:28.259282 | orchestrator | Tuesday 10 March 2026 00:44:23 +0000 (0:00:00.442) 0:00:03.589 *********
2026-03-10 00:44:28.259294 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8f76f090-a1e0-42c3-8072-1f51d4df9a8c)
2026-03-10 00:44:28.259305 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8f76f090-a1e0-42c3-8072-1f51d4df9a8c)
2026-03-10 00:44:28.259315 | orchestrator |
2026-03-10 00:44:28.259332 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:28.259352 | orchestrator | Tuesday 10 March 2026 00:44:24 +0000 (0:00:00.670) 0:00:04.259 *********
2026-03-10 00:44:28.259363 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e4712c11-e6a0-4829-954c-3e21e73d266a)
2026-03-10 00:44:28.259374 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e4712c11-e6a0-4829-954c-3e21e73d266a)
2026-03-10 00:44:28.259416 | orchestrator |
2026-03-10 00:44:28.259427 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:28.259438 | orchestrator | Tuesday 10 March 2026 00:44:25 +0000 (0:00:00.685) 0:00:04.945 *********
2026-03-10 00:44:28.259448 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5b638158-044f-4e2c-a80d-2256f7b00733)
2026-03-10 00:44:28.259459 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5b638158-044f-4e2c-a80d-2256f7b00733)
2026-03-10 00:44:28.259470 | orchestrator |
2026-03-10 00:44:28.259480 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:44:28.259491 | orchestrator | Tuesday 10 March 2026 00:44:25 +0000 (0:00:00.949) 0:00:05.894 *********
2026-03-10 00:44:28.259502 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-10 00:44:28.259513 | orchestrator |
2026-03-10 00:44:28.259523 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:28.259534 | orchestrator | Tuesday 10 March 2026 00:44:26 +0000 (0:00:00.370) 0:00:06.264 *********
2026-03-10 00:44:28.259544 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-10 00:44:28.259555 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-10 00:44:28.259566 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-10 00:44:28.259576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-10 00:44:28.259586 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-10 00:44:28.259597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-10 00:44:28.259607 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-10 00:44:28.259618 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-10 00:44:28.259629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-10 00:44:28.259639 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-10 00:44:28.259650 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-10 00:44:28.259661 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-10 00:44:28.259671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-10 00:44:28.259681 | orchestrator |
2026-03-10 00:44:28.259692 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:28.259703 | orchestrator | Tuesday 10 March 2026 00:44:26 +0000 (0:00:00.395) 0:00:06.660 *********
2026-03-10 00:44:28.259713 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:28.259724 | orchestrator |
2026-03-10 00:44:28.259734 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:28.259745 | orchestrator | Tuesday 10 March 2026 00:44:26 +0000 (0:00:00.214) 0:00:06.875 *********
2026-03-10 00:44:28.259756 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:28.259766 | orchestrator |
2026-03-10 00:44:28.259777 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:28.259788 | orchestrator | Tuesday 10 March 2026 00:44:27 +0000 (0:00:00.220) 0:00:07.095 *********
2026-03-10 00:44:28.259798 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:28.259817 | orchestrator |
2026-03-10 00:44:28.259828 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:28.259839 | orchestrator | Tuesday 10 March 2026 00:44:27 +0000 (0:00:00.214) 0:00:07.309 *********
2026-03-10 00:44:28.259850 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:28.259860 | orchestrator |
2026-03-10 00:44:28.259871 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:28.259881 | orchestrator | Tuesday 10 March 2026 00:44:27 +0000 (0:00:00.203) 0:00:07.513 *********
2026-03-10 00:44:28.259892 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:28.259902 | orchestrator |
2026-03-10 00:44:28.259913 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:28.259923 | orchestrator | Tuesday 10 March 2026 00:44:27 +0000 (0:00:00.222) 0:00:07.736 *********
2026-03-10 00:44:28.259934 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:28.259945 | orchestrator |
2026-03-10 00:44:28.259955 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:28.259966 | orchestrator | Tuesday 10 March 2026 00:44:28 +0000 (0:00:00.208) 0:00:07.946 *********
2026-03-10 00:44:28.259977 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:28.259987 | orchestrator |
2026-03-10 00:44:28.260004 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:36.096623 | orchestrator | Tuesday 10 March 2026 00:44:28 +0000 (0:00:00.204) 0:00:08.150 *********
2026-03-10 00:44:36.096723 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:36.096745 | orchestrator |
2026-03-10 00:44:36.096781 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:36.096790 | orchestrator | Tuesday 10 March 2026 00:44:28 +0000 (0:00:00.196) 0:00:08.347 *********
2026-03-10 00:44:36.096797 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-10 00:44:36.096804 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-10 00:44:36.096812 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-10 00:44:36.096818 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-10 00:44:36.096825 | orchestrator |
2026-03-10 00:44:36.096831 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:36.096851 | orchestrator | Tuesday 10 March 2026 00:44:29 +0000 (0:00:01.180) 0:00:09.527 *********
2026-03-10 00:44:36.096858 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:36.096864 | orchestrator |
2026-03-10 00:44:36.096870 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:36.096877 | orchestrator | Tuesday 10 March 2026 00:44:29 +0000 (0:00:00.234) 0:00:09.761 *********
2026-03-10 00:44:36.096883 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:36.096889 | orchestrator |
2026-03-10 00:44:36.096895 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:36.096901 | orchestrator | Tuesday 10 March 2026 00:44:30 +0000 (0:00:00.201) 0:00:09.962 *********
2026-03-10 00:44:36.096908 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:36.096914 | orchestrator |
2026-03-10 00:44:36.096920 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:44:36.096926 | orchestrator | Tuesday 10 March 2026 00:44:30 +0000 (0:00:00.206) 0:00:10.169 *********
2026-03-10 00:44:36.096932 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:36.096938 | orchestrator |
2026-03-10 00:44:36.096945 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-10 00:44:36.096951 | orchestrator | Tuesday 10 March 2026 00:44:30 +0000 (0:00:00.248) 0:00:10.417 *********
2026-03-10 00:44:36.096957 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-10 00:44:36.096963 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-03-10 00:44:36.096969 | orchestrator |
2026-03-10 00:44:36.096975 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-10 00:44:36.096982 | orchestrator | Tuesday 10 March 2026 00:44:30 +0000 (0:00:00.185) 0:00:10.603 *********
2026-03-10 00:44:36.097004 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:36.097010 | orchestrator |
2026-03-10 00:44:36.097017 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-10 00:44:36.097023 | orchestrator | Tuesday 10 March 2026 00:44:30 +0000 (0:00:00.141) 0:00:10.744 *********
2026-03-10 00:44:36.097029 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:36.097035 | orchestrator |
2026-03-10 00:44:36.097041 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-10 00:44:36.097047 | orchestrator | Tuesday 10 March 2026 00:44:30 +0000 (0:00:00.132) 0:00:10.877 *********
2026-03-10 00:44:36.097053 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:36.097060 | orchestrator |
2026-03-10 00:44:36.097070 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-10 00:44:36.097081 | orchestrator | Tuesday 10 March 2026 00:44:31 +0000 (0:00:00.142) 0:00:11.019 *********
2026-03-10 00:44:36.097091 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:44:36.097101 | orchestrator |
2026-03-10 00:44:36.097112 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-10 00:44:36.097122 | orchestrator | Tuesday 10 March 2026 00:44:31 +0000 (0:00:00.148) 0:00:11.168 *********
2026-03-10 00:44:36.097134 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'}})
2026-03-10 00:44:36.097146 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e5abf04-63a5-5f41-bb2b-61caa92fdc91'}})
2026-03-10 00:44:36.097157 | orchestrator |
2026-03-10 00:44:36.097167 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-10 00:44:36.097174 | orchestrator | Tuesday 10 March 2026 00:44:31 +0000 (0:00:00.171) 0:00:11.340 *********
2026-03-10 00:44:36.097182 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'}})
2026-03-10 00:44:36.097196 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e5abf04-63a5-5f41-bb2b-61caa92fdc91'}})
2026-03-10 00:44:36.097207 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:36.097214 | orchestrator |
2026-03-10 00:44:36.097222 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-10 00:44:36.097229 | orchestrator | Tuesday 10 March 2026 00:44:31 +0000 (0:00:00.157) 0:00:11.497 *********
2026-03-10 00:44:36.097236 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'}})
2026-03-10 00:44:36.097244 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e5abf04-63a5-5f41-bb2b-61caa92fdc91'}})
2026-03-10 00:44:36.097251 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:44:36.097258 | orchestrator |
2026-03-10 00:44:36.097265 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-10 00:44:36.097272 | orchestrator | Tuesday 10 March 2026 00:44:31 +0000 (0:00:00.384) 0:00:11.881 *********
2026-03-10 00:44:36.097280 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'}})
2026-03-10
00:44:36.097308 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e5abf04-63a5-5f41-bb2b-61caa92fdc91'}})  2026-03-10 00:44:36.097318 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:44:36.097328 | orchestrator | 2026-03-10 00:44:36.097338 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-10 00:44:36.097346 | orchestrator | Tuesday 10 March 2026 00:44:32 +0000 (0:00:00.151) 0:00:12.033 ********* 2026-03-10 00:44:36.097356 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:44:36.097367 | orchestrator | 2026-03-10 00:44:36.097427 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-10 00:44:36.097438 | orchestrator | Tuesday 10 March 2026 00:44:32 +0000 (0:00:00.155) 0:00:12.188 ********* 2026-03-10 00:44:36.097450 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:44:36.097475 | orchestrator | 2026-03-10 00:44:36.097485 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-10 00:44:36.097496 | orchestrator | Tuesday 10 March 2026 00:44:32 +0000 (0:00:00.150) 0:00:12.339 ********* 2026-03-10 00:44:36.097506 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:44:36.097517 | orchestrator | 2026-03-10 00:44:36.097529 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-10 00:44:36.097540 | orchestrator | Tuesday 10 March 2026 00:44:32 +0000 (0:00:00.131) 0:00:12.470 ********* 2026-03-10 00:44:36.097552 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:44:36.097563 | orchestrator | 2026-03-10 00:44:36.097574 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-10 00:44:36.097581 | orchestrator | Tuesday 10 March 2026 00:44:32 +0000 (0:00:00.130) 0:00:12.600 ********* 2026-03-10 00:44:36.097587 | orchestrator | skipping: [testbed-node-3] 
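The "Compile lvm_volumes" task above, together with the configuration data printed later in the log, shows the naming scheme in use: each OSD's data LV is `osd-block-<osd_lvm_uuid>` inside a VG named `ceph-<osd_lvm_uuid>`. A minimal sketch of the block-only derivation (the helper name is illustrative, not the playbook's actual Jinja/task logic):

```python
# Sketch of the "block only" lvm_volumes derivation seen in this log.
# Input shape matches the printed ceph_osd_devices; the naming scheme
# ("osd-block-<uuid>" / "ceph-<uuid>") matches the printed lvm_volumes.
def compile_lvm_volumes(ceph_osd_devices):
    volumes = []
    for device, config in ceph_osd_devices.items():
        osd_uuid = config["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{osd_uuid}",      # LV carrying the OSD data
            "data_vg": f"ceph-{osd_uuid}",        # VG that hosts the LV
        })
    return volumes

devices = {
    "sdb": {"osd_lvm_uuid": "c2da093f-67f0-5a54-a6a1-4e0ffcdb14df"},
    "sdc": {"osd_lvm_uuid": "1e5abf04-63a5-5f41-bb2b-61caa92fdc91"},
}
print(compile_lvm_volumes(devices))
```

With the UUIDs from testbed-node-3, this reproduces exactly the two `lvm_volumes` entries shown in the "Print configuration data" output below; the skipped block+db/block+wal variants would extend each entry with `db`/`wal` keys.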
2026-03-10 00:44:36.097593 | orchestrator | 2026-03-10 00:44:36.097599 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-10 00:44:36.097605 | orchestrator | Tuesday 10 March 2026 00:44:32 +0000 (0:00:00.133) 0:00:12.734 ********* 2026-03-10 00:44:36.097611 | orchestrator | ok: [testbed-node-3] => { 2026-03-10 00:44:36.097617 | orchestrator |  "ceph_osd_devices": { 2026-03-10 00:44:36.097623 | orchestrator |  "sdb": { 2026-03-10 00:44:36.097630 | orchestrator |  "osd_lvm_uuid": "c2da093f-67f0-5a54-a6a1-4e0ffcdb14df" 2026-03-10 00:44:36.097637 | orchestrator |  }, 2026-03-10 00:44:36.097643 | orchestrator |  "sdc": { 2026-03-10 00:44:36.097649 | orchestrator |  "osd_lvm_uuid": "1e5abf04-63a5-5f41-bb2b-61caa92fdc91" 2026-03-10 00:44:36.097655 | orchestrator |  } 2026-03-10 00:44:36.097661 | orchestrator |  } 2026-03-10 00:44:36.097667 | orchestrator | } 2026-03-10 00:44:36.097674 | orchestrator | 2026-03-10 00:44:36.097680 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-10 00:44:36.097686 | orchestrator | Tuesday 10 March 2026 00:44:32 +0000 (0:00:00.141) 0:00:12.875 ********* 2026-03-10 00:44:36.097692 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:44:36.097698 | orchestrator | 2026-03-10 00:44:36.097705 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-10 00:44:36.097711 | orchestrator | Tuesday 10 March 2026 00:44:33 +0000 (0:00:00.124) 0:00:13.000 ********* 2026-03-10 00:44:36.097717 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:44:36.097723 | orchestrator | 2026-03-10 00:44:36.097730 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-10 00:44:36.097736 | orchestrator | Tuesday 10 March 2026 00:44:33 +0000 (0:00:00.132) 0:00:13.132 ********* 2026-03-10 00:44:36.097742 | orchestrator | skipping: [testbed-node-3] 
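The `osd_lvm_uuid` values set by "Set UUIDs for OSD VGs/LVs" above carry a `5` in the version nibble (e.g. `c2da093f-67f0-5a54-…`), i.e. they are name-based version-5 UUIDs rather than random ones, which keeps VG/LV names stable across re-runs. A hedged sketch of how such a UUID could be derived per host/device (the namespace and name format are assumptions, not the playbook's actual inputs):

```python
import uuid

# Version-5 UUIDs are deterministic: the same namespace and name always
# yield the same UUID, so repeated runs of the configuration play keep
# identical "ceph-<uuid>" VG names. Namespace and name format here are
# illustrative assumptions.
def osd_lvm_uuid(hostname, device, namespace=uuid.NAMESPACE_DNS):
    return uuid.uuid5(namespace, f"{hostname}-{device}")

u = osd_lvm_uuid("testbed-node-3", "sdb")
print(u, u.version)  # the version field of a uuid5 result is always 5
```

Determinism here is what lets the play be idempotent: the handler only rewrites the configuration file when the compiled structure actually changes.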
2026-03-10 00:44:36.097748 | orchestrator | 2026-03-10 00:44:36.097754 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-10 00:44:36.097760 | orchestrator | Tuesday 10 March 2026 00:44:33 +0000 (0:00:00.121) 0:00:13.254 ********* 2026-03-10 00:44:36.097767 | orchestrator | changed: [testbed-node-3] => { 2026-03-10 00:44:36.097773 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-10 00:44:36.097779 | orchestrator |  "ceph_osd_devices": { 2026-03-10 00:44:36.097785 | orchestrator |  "sdb": { 2026-03-10 00:44:36.097791 | orchestrator |  "osd_lvm_uuid": "c2da093f-67f0-5a54-a6a1-4e0ffcdb14df" 2026-03-10 00:44:36.097798 | orchestrator |  }, 2026-03-10 00:44:36.097804 | orchestrator |  "sdc": { 2026-03-10 00:44:36.097810 | orchestrator |  "osd_lvm_uuid": "1e5abf04-63a5-5f41-bb2b-61caa92fdc91" 2026-03-10 00:44:36.097816 | orchestrator |  } 2026-03-10 00:44:36.097822 | orchestrator |  }, 2026-03-10 00:44:36.097828 | orchestrator |  "lvm_volumes": [ 2026-03-10 00:44:36.097835 | orchestrator |  { 2026-03-10 00:44:36.097841 | orchestrator |  "data": "osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df", 2026-03-10 00:44:36.097847 | orchestrator |  "data_vg": "ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df" 2026-03-10 00:44:36.097858 | orchestrator |  }, 2026-03-10 00:44:36.097865 | orchestrator |  { 2026-03-10 00:44:36.097871 | orchestrator |  "data": "osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91", 2026-03-10 00:44:36.097877 | orchestrator |  "data_vg": "ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91" 2026-03-10 00:44:36.097883 | orchestrator |  } 2026-03-10 00:44:36.097890 | orchestrator |  ] 2026-03-10 00:44:36.097896 | orchestrator |  } 2026-03-10 00:44:36.097902 | orchestrator | } 2026-03-10 00:44:36.097908 | orchestrator | 2026-03-10 00:44:36.097914 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-10 00:44:36.097921 | orchestrator | Tuesday 10 March 2026 
00:44:33 +0000 (0:00:00.475) 0:00:13.730 ********* 2026-03-10 00:44:36.097927 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-10 00:44:36.097933 | orchestrator | 2026-03-10 00:44:36.097939 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-10 00:44:36.097945 | orchestrator | 2026-03-10 00:44:36.097951 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-10 00:44:36.097958 | orchestrator | Tuesday 10 March 2026 00:44:35 +0000 (0:00:01.744) 0:00:15.474 ********* 2026-03-10 00:44:36.097964 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-10 00:44:36.097970 | orchestrator | 2026-03-10 00:44:36.097976 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-10 00:44:36.097982 | orchestrator | Tuesday 10 March 2026 00:44:35 +0000 (0:00:00.262) 0:00:15.736 ********* 2026-03-10 00:44:36.097989 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:44:36.097995 | orchestrator | 2026-03-10 00:44:36.098009 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:44.957262 | orchestrator | Tuesday 10 March 2026 00:44:36 +0000 (0:00:00.258) 0:00:15.995 ********* 2026-03-10 00:44:44.957420 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-10 00:44:44.957439 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-10 00:44:44.957450 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-10 00:44:44.957462 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-10 00:44:44.957473 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-10 
00:44:44.957484 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-10 00:44:44.957495 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-10 00:44:44.957510 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-10 00:44:44.957521 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-10 00:44:44.957533 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-10 00:44:44.957544 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-10 00:44:44.957555 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-10 00:44:44.957585 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-10 00:44:44.957597 | orchestrator | 2026-03-10 00:44:44.957609 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:44.957619 | orchestrator | Tuesday 10 March 2026 00:44:36 +0000 (0:00:00.375) 0:00:16.370 ********* 2026-03-10 00:44:44.957630 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:44.957642 | orchestrator | 2026-03-10 00:44:44.957653 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:44.957664 | orchestrator | Tuesday 10 March 2026 00:44:36 +0000 (0:00:00.193) 0:00:16.564 ********* 2026-03-10 00:44:44.957696 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:44.957708 | orchestrator | 2026-03-10 00:44:44.957719 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:44.957730 | orchestrator | Tuesday 10 March 2026 00:44:36 +0000 (0:00:00.215) 0:00:16.780 ********* 2026-03-10 
00:44:44.957740 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:44.957751 | orchestrator | 2026-03-10 00:44:44.957762 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:44.957775 | orchestrator | Tuesday 10 March 2026 00:44:37 +0000 (0:00:00.215) 0:00:16.996 ********* 2026-03-10 00:44:44.957789 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:44.957801 | orchestrator | 2026-03-10 00:44:44.957813 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:44.957826 | orchestrator | Tuesday 10 March 2026 00:44:37 +0000 (0:00:00.196) 0:00:17.192 ********* 2026-03-10 00:44:44.957838 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:44.957851 | orchestrator | 2026-03-10 00:44:44.957863 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:44.957875 | orchestrator | Tuesday 10 March 2026 00:44:38 +0000 (0:00:00.825) 0:00:18.018 ********* 2026-03-10 00:44:44.957887 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:44.957901 | orchestrator | 2026-03-10 00:44:44.957913 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:44.957925 | orchestrator | Tuesday 10 March 2026 00:44:38 +0000 (0:00:00.215) 0:00:18.233 ********* 2026-03-10 00:44:44.957938 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:44.957950 | orchestrator | 2026-03-10 00:44:44.957962 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:44.957975 | orchestrator | Tuesday 10 March 2026 00:44:38 +0000 (0:00:00.277) 0:00:18.511 ********* 2026-03-10 00:44:44.957987 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:44.957999 | orchestrator | 2026-03-10 00:44:44.958011 | orchestrator | TASK [Add known links to the list of available block devices] 
****************** 2026-03-10 00:44:44.958088 | orchestrator | Tuesday 10 March 2026 00:44:38 +0000 (0:00:00.201) 0:00:18.712 ********* 2026-03-10 00:44:44.958102 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d) 2026-03-10 00:44:44.958115 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d) 2026-03-10 00:44:44.958128 | orchestrator | 2026-03-10 00:44:44.958139 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:44.958150 | orchestrator | Tuesday 10 March 2026 00:44:39 +0000 (0:00:00.459) 0:00:19.172 ********* 2026-03-10 00:44:44.958160 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b94fdc5f-2b9b-46a8-a60f-74e41f269a0d) 2026-03-10 00:44:44.958171 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b94fdc5f-2b9b-46a8-a60f-74e41f269a0d) 2026-03-10 00:44:44.958182 | orchestrator | 2026-03-10 00:44:44.958193 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:44.958204 | orchestrator | Tuesday 10 March 2026 00:44:39 +0000 (0:00:00.464) 0:00:19.637 ********* 2026-03-10 00:44:44.958214 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_32f512e5-1c04-4680-91d7-4268581c2350) 2026-03-10 00:44:44.958225 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_32f512e5-1c04-4680-91d7-4268581c2350) 2026-03-10 00:44:44.958236 | orchestrator | 2026-03-10 00:44:44.958247 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:44.958277 | orchestrator | Tuesday 10 March 2026 00:44:40 +0000 (0:00:00.468) 0:00:20.105 ********* 2026-03-10 00:44:44.958288 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_21ab9d1e-083b-4748-865b-4e7341aec385) 2026-03-10 00:44:44.958299 | orchestrator | 
ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_21ab9d1e-083b-4748-865b-4e7341aec385) 2026-03-10 00:44:44.958310 | orchestrator | 2026-03-10 00:44:44.958329 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:44.958340 | orchestrator | Tuesday 10 March 2026 00:44:40 +0000 (0:00:00.440) 0:00:20.545 ********* 2026-03-10 00:44:44.958351 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-10 00:44:44.958362 | orchestrator | 2026-03-10 00:44:44.958427 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:44.958441 | orchestrator | Tuesday 10 March 2026 00:44:41 +0000 (0:00:00.358) 0:00:20.904 ********* 2026-03-10 00:44:44.958452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2026-03-10 00:44:44.958462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-10 00:44:44.958493 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-10 00:44:44.958505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-10 00:44:44.958515 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-10 00:44:44.958526 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-10 00:44:44.958536 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-10 00:44:44.958547 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-10 00:44:44.958557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-10 00:44:44.958567 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-10 00:44:44.958578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-10 00:44:44.958588 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-10 00:44:44.958599 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-10 00:44:44.958609 | orchestrator | 2026-03-10 00:44:44.958620 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:44.958630 | orchestrator | Tuesday 10 March 2026 00:44:41 +0000 (0:00:00.445) 0:00:21.349 ********* 2026-03-10 00:44:44.958641 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:44.958651 | orchestrator | 2026-03-10 00:44:44.958662 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:44.958672 | orchestrator | Tuesday 10 March 2026 00:44:42 +0000 (0:00:00.728) 0:00:22.078 ********* 2026-03-10 00:44:44.958684 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:44.958704 | orchestrator | 2026-03-10 00:44:44.958724 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:44.958742 | orchestrator | Tuesday 10 March 2026 00:44:42 +0000 (0:00:00.228) 0:00:22.306 ********* 2026-03-10 00:44:44.958760 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:44.958779 | orchestrator | 2026-03-10 00:44:44.958809 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:44.958828 | orchestrator | Tuesday 10 March 2026 00:44:42 +0000 (0:00:00.219) 0:00:22.526 ********* 2026-03-10 00:44:44.958847 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:44.958867 | orchestrator | 2026-03-10 00:44:44.958887 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2026-03-10 00:44:44.958906 | orchestrator | Tuesday 10 March 2026 00:44:42 +0000 (0:00:00.187) 0:00:22.714 ********* 2026-03-10 00:44:44.958924 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:44.958935 | orchestrator | 2026-03-10 00:44:44.958945 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:44.958956 | orchestrator | Tuesday 10 March 2026 00:44:43 +0000 (0:00:00.223) 0:00:22.937 ********* 2026-03-10 00:44:44.958967 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:44.958989 | orchestrator | 2026-03-10 00:44:44.959000 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:44.959010 | orchestrator | Tuesday 10 March 2026 00:44:43 +0000 (0:00:00.233) 0:00:23.171 ********* 2026-03-10 00:44:44.959021 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:44.959032 | orchestrator | 2026-03-10 00:44:44.959042 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:44.959053 | orchestrator | Tuesday 10 March 2026 00:44:43 +0000 (0:00:00.252) 0:00:23.424 ********* 2026-03-10 00:44:44.959063 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:44.959074 | orchestrator | 2026-03-10 00:44:44.959085 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:44.959096 | orchestrator | Tuesday 10 March 2026 00:44:43 +0000 (0:00:00.220) 0:00:23.645 ********* 2026-03-10 00:44:44.959106 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-10 00:44:44.959118 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-10 00:44:44.959129 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-10 00:44:44.959139 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-10 00:44:44.959150 | orchestrator | 2026-03-10 
00:44:44.959161 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:44.959171 | orchestrator | Tuesday 10 March 2026 00:44:44 +0000 (0:00:01.063) 0:00:24.708 ********* 2026-03-10 00:44:44.959182 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:52.453541 | orchestrator | 2026-03-10 00:44:52.453663 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:52.453680 | orchestrator | Tuesday 10 March 2026 00:44:45 +0000 (0:00:00.226) 0:00:24.935 ********* 2026-03-10 00:44:52.453691 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:52.453713 | orchestrator | 2026-03-10 00:44:52.453725 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:52.453735 | orchestrator | Tuesday 10 March 2026 00:44:45 +0000 (0:00:00.230) 0:00:25.165 ********* 2026-03-10 00:44:52.453745 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:52.453755 | orchestrator | 2026-03-10 00:44:52.453765 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:44:52.453774 | orchestrator | Tuesday 10 March 2026 00:44:45 +0000 (0:00:00.218) 0:00:25.384 ********* 2026-03-10 00:44:52.453784 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:52.453793 | orchestrator | 2026-03-10 00:44:52.453803 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-10 00:44:52.453813 | orchestrator | Tuesday 10 March 2026 00:44:46 +0000 (0:00:00.772) 0:00:26.156 ********* 2026-03-10 00:44:52.453822 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2026-03-10 00:44:52.453832 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2026-03-10 00:44:52.453842 | orchestrator | 2026-03-10 00:44:52.453852 | orchestrator | TASK [Generate WAL VG names] 
*************************************************** 2026-03-10 00:44:52.453878 | orchestrator | Tuesday 10 March 2026 00:44:46 +0000 (0:00:00.267) 0:00:26.423 ********* 2026-03-10 00:44:52.453889 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:52.453899 | orchestrator | 2026-03-10 00:44:52.453908 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-10 00:44:52.453918 | orchestrator | Tuesday 10 March 2026 00:44:46 +0000 (0:00:00.181) 0:00:26.605 ********* 2026-03-10 00:44:52.453928 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:52.453937 | orchestrator | 2026-03-10 00:44:52.453947 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-10 00:44:52.453960 | orchestrator | Tuesday 10 March 2026 00:44:46 +0000 (0:00:00.156) 0:00:26.762 ********* 2026-03-10 00:44:52.453970 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:52.453980 | orchestrator | 2026-03-10 00:44:52.454006 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-10 00:44:52.454064 | orchestrator | Tuesday 10 March 2026 00:44:46 +0000 (0:00:00.131) 0:00:26.894 ********* 2026-03-10 00:44:52.454099 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:44:52.454111 | orchestrator | 2026-03-10 00:44:52.454121 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-10 00:44:52.454130 | orchestrator | Tuesday 10 March 2026 00:44:47 +0000 (0:00:00.133) 0:00:27.027 ********* 2026-03-10 00:44:52.454140 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'}}) 2026-03-10 00:44:52.454150 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5a55caf6-84ae-542a-a466-02d3e6c6095e'}}) 2026-03-10 00:44:52.454160 | orchestrator | 2026-03-10 00:44:52.454169 | orchestrator | TASK 
[Generate lvm_volumes structure (block + db)] ***************************** 2026-03-10 00:44:52.454179 | orchestrator | Tuesday 10 March 2026 00:44:47 +0000 (0:00:00.185) 0:00:27.212 ********* 2026-03-10 00:44:52.454189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'}})  2026-03-10 00:44:52.454200 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5a55caf6-84ae-542a-a466-02d3e6c6095e'}})  2026-03-10 00:44:52.454210 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:52.454220 | orchestrator | 2026-03-10 00:44:52.454229 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-10 00:44:52.454239 | orchestrator | Tuesday 10 March 2026 00:44:47 +0000 (0:00:00.156) 0:00:27.369 ********* 2026-03-10 00:44:52.454248 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'}})  2026-03-10 00:44:52.454258 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5a55caf6-84ae-542a-a466-02d3e6c6095e'}})  2026-03-10 00:44:52.454268 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:52.454278 | orchestrator | 2026-03-10 00:44:52.454287 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-10 00:44:52.454297 | orchestrator | Tuesday 10 March 2026 00:44:47 +0000 (0:00:00.160) 0:00:27.530 ********* 2026-03-10 00:44:52.454307 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'}})  2026-03-10 00:44:52.454316 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5a55caf6-84ae-542a-a466-02d3e6c6095e'}})  2026-03-10 00:44:52.454326 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:52.454335 | 
orchestrator | 2026-03-10 00:44:52.454345 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-10 00:44:52.454355 | orchestrator | Tuesday 10 March 2026 00:44:47 +0000 (0:00:00.192) 0:00:27.722 ********* 2026-03-10 00:44:52.454384 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:44:52.454395 | orchestrator | 2026-03-10 00:44:52.454404 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-10 00:44:52.454414 | orchestrator | Tuesday 10 March 2026 00:44:47 +0000 (0:00:00.167) 0:00:27.890 ********* 2026-03-10 00:44:52.454424 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:44:52.454433 | orchestrator | 2026-03-10 00:44:52.454443 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-10 00:44:52.454453 | orchestrator | Tuesday 10 March 2026 00:44:48 +0000 (0:00:00.155) 0:00:28.045 ********* 2026-03-10 00:44:52.454480 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:52.454491 | orchestrator | 2026-03-10 00:44:52.454500 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-10 00:44:52.454510 | orchestrator | Tuesday 10 March 2026 00:44:48 +0000 (0:00:00.393) 0:00:28.438 ********* 2026-03-10 00:44:52.454520 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:52.454530 | orchestrator | 2026-03-10 00:44:52.454539 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-10 00:44:52.454549 | orchestrator | Tuesday 10 March 2026 00:44:48 +0000 (0:00:00.172) 0:00:28.611 ********* 2026-03-10 00:44:52.454559 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:52.454578 | orchestrator | 2026-03-10 00:44:52.454588 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-10 00:44:52.454597 | orchestrator | Tuesday 10 March 2026 00:44:48 +0000 
(0:00:00.183) 0:00:28.795 ********* 2026-03-10 00:44:52.454607 | orchestrator | ok: [testbed-node-4] => { 2026-03-10 00:44:52.454616 | orchestrator |  "ceph_osd_devices": { 2026-03-10 00:44:52.454626 | orchestrator |  "sdb": { 2026-03-10 00:44:52.454636 | orchestrator |  "osd_lvm_uuid": "c7cdfd74-cae8-56d1-a0f9-4438e0fe684e" 2026-03-10 00:44:52.454645 | orchestrator |  }, 2026-03-10 00:44:52.454655 | orchestrator |  "sdc": { 2026-03-10 00:44:52.454665 | orchestrator |  "osd_lvm_uuid": "5a55caf6-84ae-542a-a466-02d3e6c6095e" 2026-03-10 00:44:52.454674 | orchestrator |  } 2026-03-10 00:44:52.454684 | orchestrator |  } 2026-03-10 00:44:52.454693 | orchestrator | } 2026-03-10 00:44:52.454703 | orchestrator | 2026-03-10 00:44:52.454713 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-10 00:44:52.454722 | orchestrator | Tuesday 10 March 2026 00:44:49 +0000 (0:00:00.223) 0:00:29.019 ********* 2026-03-10 00:44:52.454732 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:52.454742 | orchestrator | 2026-03-10 00:44:52.454751 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-10 00:44:52.454760 | orchestrator | Tuesday 10 March 2026 00:44:49 +0000 (0:00:00.185) 0:00:29.204 ********* 2026-03-10 00:44:52.454770 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:52.454779 | orchestrator | 2026-03-10 00:44:52.454789 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-10 00:44:52.454799 | orchestrator | Tuesday 10 March 2026 00:44:49 +0000 (0:00:00.170) 0:00:29.374 ********* 2026-03-10 00:44:52.454808 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:44:52.454818 | orchestrator | 2026-03-10 00:44:52.454827 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-10 00:44:52.454843 | orchestrator | Tuesday 10 March 2026 00:44:49 +0000 
(0:00:00.171) 0:00:29.546 ********* 2026-03-10 00:44:52.454853 | orchestrator | changed: [testbed-node-4] => { 2026-03-10 00:44:52.454863 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-10 00:44:52.454872 | orchestrator |  "ceph_osd_devices": { 2026-03-10 00:44:52.454882 | orchestrator |  "sdb": { 2026-03-10 00:44:52.454891 | orchestrator |  "osd_lvm_uuid": "c7cdfd74-cae8-56d1-a0f9-4438e0fe684e" 2026-03-10 00:44:52.454901 | orchestrator |  }, 2026-03-10 00:44:52.454910 | orchestrator |  "sdc": { 2026-03-10 00:44:52.454920 | orchestrator |  "osd_lvm_uuid": "5a55caf6-84ae-542a-a466-02d3e6c6095e" 2026-03-10 00:44:52.454929 | orchestrator |  } 2026-03-10 00:44:52.454939 | orchestrator |  }, 2026-03-10 00:44:52.454948 | orchestrator |  "lvm_volumes": [ 2026-03-10 00:44:52.454958 | orchestrator |  { 2026-03-10 00:44:52.454967 | orchestrator |  "data": "osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e", 2026-03-10 00:44:52.454977 | orchestrator |  "data_vg": "ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e" 2026-03-10 00:44:52.454986 | orchestrator |  }, 2026-03-10 00:44:52.454996 | orchestrator |  { 2026-03-10 00:44:52.455005 | orchestrator |  "data": "osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e", 2026-03-10 00:44:52.455015 | orchestrator |  "data_vg": "ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e" 2026-03-10 00:44:52.455024 | orchestrator |  } 2026-03-10 00:44:52.455034 | orchestrator |  ] 2026-03-10 00:44:52.455043 | orchestrator |  } 2026-03-10 00:44:52.455053 | orchestrator | } 2026-03-10 00:44:52.455062 | orchestrator | 2026-03-10 00:44:52.455072 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-10 00:44:52.455081 | orchestrator | Tuesday 10 March 2026 00:44:49 +0000 (0:00:00.251) 0:00:29.798 ********* 2026-03-10 00:44:52.455091 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-10 00:44:52.455100 | orchestrator | 2026-03-10 00:44:52.455116 | orchestrator | PLAY [Ceph 
configure LVM] ****************************************************** 2026-03-10 00:44:52.455125 | orchestrator | 2026-03-10 00:44:52.455135 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-10 00:44:52.455144 | orchestrator | Tuesday 10 March 2026 00:44:51 +0000 (0:00:01.213) 0:00:31.011 ********* 2026-03-10 00:44:52.455154 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-10 00:44:52.455163 | orchestrator | 2026-03-10 00:44:52.455173 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-10 00:44:52.455183 | orchestrator | Tuesday 10 March 2026 00:44:51 +0000 (0:00:00.795) 0:00:31.807 ********* 2026-03-10 00:44:52.455192 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:44:52.455202 | orchestrator | 2026-03-10 00:44:52.455211 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:44:52.455220 | orchestrator | Tuesday 10 March 2026 00:44:52 +0000 (0:00:00.253) 0:00:32.061 ********* 2026-03-10 00:44:52.455230 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-10 00:44:52.455239 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-10 00:44:52.455249 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-10 00:44:52.455259 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-10 00:44:52.455268 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-10 00:44:52.455284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-10 00:45:00.354816 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-10 00:45:00.354919 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-10 00:45:00.354934 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-10 00:45:00.354946 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-10 00:45:00.354957 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-10 00:45:00.354967 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-10 00:45:00.354978 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-10 00:45:00.354989 | orchestrator | 2026-03-10 00:45:00.355001 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:45:00.355012 | orchestrator | Tuesday 10 March 2026 00:44:52 +0000 (0:00:00.353) 0:00:32.414 ********* 2026-03-10 00:45:00.355023 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.355035 | orchestrator | 2026-03-10 00:45:00.355046 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:45:00.355057 | orchestrator | Tuesday 10 March 2026 00:44:52 +0000 (0:00:00.141) 0:00:32.556 ********* 2026-03-10 00:45:00.355067 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.355078 | orchestrator | 2026-03-10 00:45:00.355089 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:45:00.355099 | orchestrator | Tuesday 10 March 2026 00:44:52 +0000 (0:00:00.140) 0:00:32.696 ********* 2026-03-10 00:45:00.355110 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.355121 | orchestrator | 2026-03-10 00:45:00.355131 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:45:00.355142 | 
orchestrator | Tuesday 10 March 2026 00:44:52 +0000 (0:00:00.142) 0:00:32.838 ********* 2026-03-10 00:45:00.355153 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.355163 | orchestrator | 2026-03-10 00:45:00.355174 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:45:00.355185 | orchestrator | Tuesday 10 March 2026 00:44:53 +0000 (0:00:00.135) 0:00:32.974 ********* 2026-03-10 00:45:00.355219 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.355231 | orchestrator | 2026-03-10 00:45:00.355241 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:45:00.355252 | orchestrator | Tuesday 10 March 2026 00:44:53 +0000 (0:00:00.143) 0:00:33.118 ********* 2026-03-10 00:45:00.355263 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.355273 | orchestrator | 2026-03-10 00:45:00.355284 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:45:00.355294 | orchestrator | Tuesday 10 March 2026 00:44:53 +0000 (0:00:00.145) 0:00:33.263 ********* 2026-03-10 00:45:00.355305 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.355315 | orchestrator | 2026-03-10 00:45:00.355326 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:45:00.355337 | orchestrator | Tuesday 10 March 2026 00:44:53 +0000 (0:00:00.188) 0:00:33.452 ********* 2026-03-10 00:45:00.355347 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.355387 | orchestrator | 2026-03-10 00:45:00.355400 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:45:00.355413 | orchestrator | Tuesday 10 March 2026 00:44:53 +0000 (0:00:00.168) 0:00:33.621 ********* 2026-03-10 00:45:00.355425 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b) 2026-03-10 00:45:00.355439 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b) 2026-03-10 00:45:00.355451 | orchestrator | 2026-03-10 00:45:00.355463 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:45:00.355476 | orchestrator | Tuesday 10 March 2026 00:44:54 +0000 (0:00:00.705) 0:00:34.326 ********* 2026-03-10 00:45:00.355506 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_525599b5-6362-4aac-a0b3-94bd4cb39972) 2026-03-10 00:45:00.355520 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_525599b5-6362-4aac-a0b3-94bd4cb39972) 2026-03-10 00:45:00.355532 | orchestrator | 2026-03-10 00:45:00.355545 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:45:00.355557 | orchestrator | Tuesday 10 March 2026 00:44:54 +0000 (0:00:00.421) 0:00:34.748 ********* 2026-03-10 00:45:00.355570 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_885a647d-e739-4ea9-ae01-9c2ce04d6822) 2026-03-10 00:45:00.355583 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_885a647d-e739-4ea9-ae01-9c2ce04d6822) 2026-03-10 00:45:00.355595 | orchestrator | 2026-03-10 00:45:00.355607 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:45:00.355619 | orchestrator | Tuesday 10 March 2026 00:44:55 +0000 (0:00:00.427) 0:00:35.175 ********* 2026-03-10 00:45:00.355631 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3e39970d-8644-42a9-a13b-932f32b0237f) 2026-03-10 00:45:00.355644 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3e39970d-8644-42a9-a13b-932f32b0237f) 2026-03-10 00:45:00.355656 | orchestrator | 2026-03-10 00:45:00.355669 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-10 00:45:00.355682 | orchestrator | Tuesday 10 March 2026 00:44:55 +0000 (0:00:00.426) 0:00:35.602 ********* 2026-03-10 00:45:00.355694 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-10 00:45:00.355706 | orchestrator | 2026-03-10 00:45:00.355719 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:45:00.355747 | orchestrator | Tuesday 10 March 2026 00:44:56 +0000 (0:00:00.339) 0:00:35.942 ********* 2026-03-10 00:45:00.355760 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-10 00:45:00.355770 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-10 00:45:00.355782 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-10 00:45:00.355792 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-10 00:45:00.355810 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-10 00:45:00.355821 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-10 00:45:00.355831 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-10 00:45:00.355842 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-10 00:45:00.355852 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-10 00:45:00.355863 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-10 00:45:00.355873 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
2026-03-10 00:45:00.355884 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-10 00:45:00.355894 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-10 00:45:00.355905 | orchestrator | 2026-03-10 00:45:00.355915 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:45:00.355926 | orchestrator | Tuesday 10 March 2026 00:44:56 +0000 (0:00:00.452) 0:00:36.395 ********* 2026-03-10 00:45:00.355937 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.355947 | orchestrator | 2026-03-10 00:45:00.355958 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:45:00.355968 | orchestrator | Tuesday 10 March 2026 00:44:56 +0000 (0:00:00.212) 0:00:36.607 ********* 2026-03-10 00:45:00.355979 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.355989 | orchestrator | 2026-03-10 00:45:00.356000 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:45:00.356010 | orchestrator | Tuesday 10 March 2026 00:44:56 +0000 (0:00:00.211) 0:00:36.819 ********* 2026-03-10 00:45:00.356021 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.356031 | orchestrator | 2026-03-10 00:45:00.356042 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:45:00.356053 | orchestrator | Tuesday 10 March 2026 00:44:57 +0000 (0:00:00.222) 0:00:37.042 ********* 2026-03-10 00:45:00.356063 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.356074 | orchestrator | 2026-03-10 00:45:00.356084 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:45:00.356095 | orchestrator | Tuesday 10 March 2026 00:44:57 +0000 (0:00:00.204) 0:00:37.246 ********* 2026-03-10 00:45:00.356105 
| orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.356116 | orchestrator | 2026-03-10 00:45:00.356126 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:45:00.356137 | orchestrator | Tuesday 10 March 2026 00:44:57 +0000 (0:00:00.223) 0:00:37.469 ********* 2026-03-10 00:45:00.356148 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.356158 | orchestrator | 2026-03-10 00:45:00.356169 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:45:00.356179 | orchestrator | Tuesday 10 March 2026 00:44:58 +0000 (0:00:00.723) 0:00:38.193 ********* 2026-03-10 00:45:00.356190 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.356200 | orchestrator | 2026-03-10 00:45:00.356211 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:45:00.356222 | orchestrator | Tuesday 10 March 2026 00:44:58 +0000 (0:00:00.264) 0:00:38.457 ********* 2026-03-10 00:45:00.356232 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.356243 | orchestrator | 2026-03-10 00:45:00.356253 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:45:00.356264 | orchestrator | Tuesday 10 March 2026 00:44:58 +0000 (0:00:00.248) 0:00:38.706 ********* 2026-03-10 00:45:00.356275 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-10 00:45:00.356292 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-10 00:45:00.356303 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-10 00:45:00.356314 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-10 00:45:00.356324 | orchestrator | 2026-03-10 00:45:00.356335 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:45:00.356345 | orchestrator | Tuesday 10 March 2026 00:44:59 +0000 (0:00:00.700) 0:00:39.406 
********* 2026-03-10 00:45:00.356356 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.356383 | orchestrator | 2026-03-10 00:45:00.356395 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:45:00.356406 | orchestrator | Tuesday 10 March 2026 00:44:59 +0000 (0:00:00.206) 0:00:39.613 ********* 2026-03-10 00:45:00.356416 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.356427 | orchestrator | 2026-03-10 00:45:00.356437 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:45:00.356448 | orchestrator | Tuesday 10 March 2026 00:44:59 +0000 (0:00:00.223) 0:00:39.837 ********* 2026-03-10 00:45:00.356459 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.356469 | orchestrator | 2026-03-10 00:45:00.356480 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:45:00.356491 | orchestrator | Tuesday 10 March 2026 00:45:00 +0000 (0:00:00.205) 0:00:40.042 ********* 2026-03-10 00:45:00.356502 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:00.356512 | orchestrator | 2026-03-10 00:45:00.356529 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-10 00:45:04.861083 | orchestrator | Tuesday 10 March 2026 00:45:00 +0000 (0:00:00.207) 0:00:40.249 ********* 2026-03-10 00:45:04.861207 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-10 00:45:04.861222 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-10 00:45:04.861232 | orchestrator | 2026-03-10 00:45:04.861242 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-10 00:45:04.861253 | orchestrator | Tuesday 10 March 2026 00:45:00 +0000 (0:00:00.181) 0:00:40.431 ********* 2026-03-10 00:45:04.861263 | orchestrator | skipping: 
[testbed-node-5] 2026-03-10 00:45:04.861272 | orchestrator | 2026-03-10 00:45:04.861282 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-10 00:45:04.861292 | orchestrator | Tuesday 10 March 2026 00:45:00 +0000 (0:00:00.155) 0:00:40.587 ********* 2026-03-10 00:45:04.861320 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:04.861331 | orchestrator | 2026-03-10 00:45:04.861340 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-10 00:45:04.861350 | orchestrator | Tuesday 10 March 2026 00:45:00 +0000 (0:00:00.146) 0:00:40.733 ********* 2026-03-10 00:45:04.861414 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:04.861427 | orchestrator | 2026-03-10 00:45:04.861437 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-10 00:45:04.861447 | orchestrator | Tuesday 10 March 2026 00:45:01 +0000 (0:00:00.377) 0:00:41.110 ********* 2026-03-10 00:45:04.861457 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:45:04.861467 | orchestrator | 2026-03-10 00:45:04.861477 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-10 00:45:04.861486 | orchestrator | Tuesday 10 March 2026 00:45:01 +0000 (0:00:00.145) 0:00:41.256 ********* 2026-03-10 00:45:04.861496 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '276dc5cf-0fff-57f4-b280-c3cda8556bee'}}) 2026-03-10 00:45:04.861511 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1c4f45a1-f837-5281-b6b5-75662d68eedd'}}) 2026-03-10 00:45:04.861521 | orchestrator | 2026-03-10 00:45:04.861530 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-10 00:45:04.861540 | orchestrator | Tuesday 10 March 2026 00:45:01 +0000 (0:00:00.167) 0:00:41.423 ********* 2026-03-10 00:45:04.861550 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '276dc5cf-0fff-57f4-b280-c3cda8556bee'}})  2026-03-10 00:45:04.861624 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1c4f45a1-f837-5281-b6b5-75662d68eedd'}})  2026-03-10 00:45:04.861638 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:04.861650 | orchestrator | 2026-03-10 00:45:04.861661 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-10 00:45:04.861672 | orchestrator | Tuesday 10 March 2026 00:45:01 +0000 (0:00:00.152) 0:00:41.576 ********* 2026-03-10 00:45:04.861683 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '276dc5cf-0fff-57f4-b280-c3cda8556bee'}})  2026-03-10 00:45:04.861699 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1c4f45a1-f837-5281-b6b5-75662d68eedd'}})  2026-03-10 00:45:04.861714 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:04.861737 | orchestrator | 2026-03-10 00:45:04.861761 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-10 00:45:04.861776 | orchestrator | Tuesday 10 March 2026 00:45:01 +0000 (0:00:00.189) 0:00:41.765 ********* 2026-03-10 00:45:04.861792 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '276dc5cf-0fff-57f4-b280-c3cda8556bee'}})  2026-03-10 00:45:04.861807 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1c4f45a1-f837-5281-b6b5-75662d68eedd'}})  2026-03-10 00:45:04.861824 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:04.861839 | orchestrator | 2026-03-10 00:45:04.861853 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-10 00:45:04.861869 | orchestrator | Tuesday 10 March 2026 00:45:02 +0000 
(0:00:00.155) 0:00:41.920 ********* 2026-03-10 00:45:04.861886 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:45:04.861901 | orchestrator | 2026-03-10 00:45:04.861917 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-10 00:45:04.861932 | orchestrator | Tuesday 10 March 2026 00:45:02 +0000 (0:00:00.140) 0:00:42.061 ********* 2026-03-10 00:45:04.861947 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:45:04.861963 | orchestrator | 2026-03-10 00:45:04.861979 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-10 00:45:04.861995 | orchestrator | Tuesday 10 March 2026 00:45:02 +0000 (0:00:00.152) 0:00:42.214 ********* 2026-03-10 00:45:04.862012 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:04.862105 | orchestrator | 2026-03-10 00:45:04.862160 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-10 00:45:04.862180 | orchestrator | Tuesday 10 March 2026 00:45:02 +0000 (0:00:00.147) 0:00:42.361 ********* 2026-03-10 00:45:04.862196 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:04.862212 | orchestrator | 2026-03-10 00:45:04.862229 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-10 00:45:04.862247 | orchestrator | Tuesday 10 March 2026 00:45:02 +0000 (0:00:00.150) 0:00:42.512 ********* 2026-03-10 00:45:04.862265 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:04.862280 | orchestrator | 2026-03-10 00:45:04.862298 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-10 00:45:04.862315 | orchestrator | Tuesday 10 March 2026 00:45:02 +0000 (0:00:00.145) 0:00:42.658 ********* 2026-03-10 00:45:04.862332 | orchestrator | ok: [testbed-node-5] => { 2026-03-10 00:45:04.862349 | orchestrator |  "ceph_osd_devices": { 2026-03-10 00:45:04.862393 | orchestrator |  "sdb": { 
2026-03-10 00:45:04.862434 | orchestrator |  "osd_lvm_uuid": "276dc5cf-0fff-57f4-b280-c3cda8556bee" 2026-03-10 00:45:04.862451 | orchestrator |  }, 2026-03-10 00:45:04.862467 | orchestrator |  "sdc": { 2026-03-10 00:45:04.862482 | orchestrator |  "osd_lvm_uuid": "1c4f45a1-f837-5281-b6b5-75662d68eedd" 2026-03-10 00:45:04.862498 | orchestrator |  } 2026-03-10 00:45:04.862513 | orchestrator |  } 2026-03-10 00:45:04.862529 | orchestrator | } 2026-03-10 00:45:04.862545 | orchestrator | 2026-03-10 00:45:04.862577 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-10 00:45:04.862592 | orchestrator | Tuesday 10 March 2026 00:45:02 +0000 (0:00:00.160) 0:00:42.819 ********* 2026-03-10 00:45:04.862606 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:04.862622 | orchestrator | 2026-03-10 00:45:04.862636 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-10 00:45:04.862650 | orchestrator | Tuesday 10 March 2026 00:45:03 +0000 (0:00:00.379) 0:00:43.198 ********* 2026-03-10 00:45:04.862666 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:04.862680 | orchestrator | 2026-03-10 00:45:04.862695 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-10 00:45:04.862711 | orchestrator | Tuesday 10 March 2026 00:45:03 +0000 (0:00:00.150) 0:00:43.349 ********* 2026-03-10 00:45:04.862727 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:45:04.862742 | orchestrator | 2026-03-10 00:45:04.862756 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-10 00:45:04.862772 | orchestrator | Tuesday 10 March 2026 00:45:03 +0000 (0:00:00.143) 0:00:43.492 ********* 2026-03-10 00:45:04.862787 | orchestrator | changed: [testbed-node-5] => { 2026-03-10 00:45:04.862803 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-10 00:45:04.862819 | orchestrator | 
 "ceph_osd_devices": { 2026-03-10 00:45:04.862836 | orchestrator |  "sdb": { 2026-03-10 00:45:04.862853 | orchestrator |  "osd_lvm_uuid": "276dc5cf-0fff-57f4-b280-c3cda8556bee" 2026-03-10 00:45:04.862869 | orchestrator |  }, 2026-03-10 00:45:04.862884 | orchestrator |  "sdc": { 2026-03-10 00:45:04.862901 | orchestrator |  "osd_lvm_uuid": "1c4f45a1-f837-5281-b6b5-75662d68eedd" 2026-03-10 00:45:04.862918 | orchestrator |  } 2026-03-10 00:45:04.862934 | orchestrator |  }, 2026-03-10 00:45:04.862951 | orchestrator |  "lvm_volumes": [ 2026-03-10 00:45:04.862967 | orchestrator |  { 2026-03-10 00:45:04.862985 | orchestrator |  "data": "osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee", 2026-03-10 00:45:04.863001 | orchestrator |  "data_vg": "ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee" 2026-03-10 00:45:04.863018 | orchestrator |  }, 2026-03-10 00:45:04.863036 | orchestrator |  { 2026-03-10 00:45:04.863046 | orchestrator |  "data": "osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd", 2026-03-10 00:45:04.863056 | orchestrator |  "data_vg": "ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd" 2026-03-10 00:45:04.863065 | orchestrator |  } 2026-03-10 00:45:04.863075 | orchestrator |  ] 2026-03-10 00:45:04.863084 | orchestrator |  } 2026-03-10 00:45:04.863094 | orchestrator | } 2026-03-10 00:45:04.863103 | orchestrator | 2026-03-10 00:45:04.863113 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-10 00:45:04.863123 | orchestrator | Tuesday 10 March 2026 00:45:03 +0000 (0:00:00.208) 0:00:43.701 ********* 2026-03-10 00:45:04.863132 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-10 00:45:04.863142 | orchestrator | 2026-03-10 00:45:04.863152 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:45:04.863162 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-10 00:45:04.863173 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-10 00:45:04.863182 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-10 00:45:04.863192 | orchestrator | 2026-03-10 00:45:04.863201 | orchestrator | 2026-03-10 00:45:04.863211 | orchestrator | 2026-03-10 00:45:04.863220 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:45:04.863230 | orchestrator | Tuesday 10 March 2026 00:45:04 +0000 (0:00:01.025) 0:00:44.727 ********* 2026-03-10 00:45:04.863250 | orchestrator | =============================================================================== 2026-03-10 00:45:04.863259 | orchestrator | Write configuration file ------------------------------------------------ 3.98s 2026-03-10 00:45:04.863269 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.33s 2026-03-10 00:45:04.863299 | orchestrator | Add known partitions to the list of available block devices ------------- 1.29s 2026-03-10 00:45:04.863309 | orchestrator | Add known links to the list of available block devices ------------------ 1.22s 2026-03-10 00:45:04.863318 | orchestrator | Add known partitions to the list of available block devices ------------- 1.18s 2026-03-10 00:45:04.863328 | orchestrator | Add known partitions to the list of available block devices ------------- 1.06s 2026-03-10 00:45:04.863338 | orchestrator | Add known links to the list of available block devices ------------------ 0.95s 2026-03-10 00:45:04.863347 | orchestrator | Print configuration data ------------------------------------------------ 0.94s 2026-03-10 00:45:04.863356 | orchestrator | Add known links to the list of available block devices ------------------ 0.83s 2026-03-10 00:45:04.863394 | orchestrator | Add known partitions to the list of available block devices ------------- 0.77s 2026-03-10 
00:45:04.863404 | orchestrator | Get initial list of available block devices ----------------------------- 0.77s 2026-03-10 00:45:04.863414 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.73s 2026-03-10 00:45:04.863423 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s 2026-03-10 00:45:04.863445 | orchestrator | Add known partitions to the list of available block devices ------------- 0.72s 2026-03-10 00:45:05.268740 | orchestrator | Add known links to the list of available block devices ------------------ 0.71s 2026-03-10 00:45:05.268832 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2026-03-10 00:45:05.268845 | orchestrator | Print WAL devices ------------------------------------------------------- 0.69s 2026-03-10 00:45:05.268854 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s 2026-03-10 00:45:05.268863 | orchestrator | Set DB devices config data ---------------------------------------------- 0.67s 2026-03-10 00:45:05.268872 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2026-03-10 00:45:28.097765 | orchestrator | 2026-03-10 00:45:28 | INFO  | Task d20648b0-216b-495b-9efd-a4f9b6be3661 (sync inventory) is running in background. Output coming soon. 
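The "Print configuration data" tasks above show the same transformation on every node: each entry of `ceph_osd_devices` carries an `osd_lvm_uuid`, and the derived `lvm_volumes` list names the LV `osd-block-<uuid>` inside a VG `ceph-<uuid>`. A minimal sketch of that mapping (values taken from the testbed-node-4 output; the helper itself is illustrative, not the playbook's actual code):

```python
# Sketch of the uuid -> lvm_volumes expansion visible in the log above.
# Input mirrors the "ceph_osd_devices" dump for testbed-node-4.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "c7cdfd74-cae8-56d1-a0f9-4438e0fe684e"},
    "sdc": {"osd_lvm_uuid": "5a55caf6-84ae-542a-a466-02d3e6c6095e"},
}

# Each device becomes one block-only lvm_volumes entry:
# LV "osd-block-<uuid>" inside VG "ceph-<uuid>".
lvm_volumes = [
    {
        "data": f"osd-block-{v['osd_lvm_uuid']}",
        "data_vg": f"ceph-{v['osd_lvm_uuid']}",
    }
    for v in ceph_osd_devices.values()
]
```

This reproduces the `lvm_volumes` structure printed by the "Print configuration data" task; the DB/WAL variants (skipped in this run) would add `db`/`wal` keys per entry.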
2026-03-10 00:45:56.045961 | orchestrator | 2026-03-10 00:45:30 | INFO  | Starting group_vars file reorganization
2026-03-10 00:45:56.046134 | orchestrator | 2026-03-10 00:45:30 | INFO  | Moved 0 file(s) to their respective directories
2026-03-10 00:45:56.046154 | orchestrator | 2026-03-10 00:45:30 | INFO  | Group_vars file reorganization completed
2026-03-10 00:45:56.046166 | orchestrator | 2026-03-10 00:45:33 | INFO  | Starting variable preparation from inventory
2026-03-10 00:45:56.046178 | orchestrator | 2026-03-10 00:45:36 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-10 00:45:56.046190 | orchestrator | 2026-03-10 00:45:36 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-10 00:45:56.046217 | orchestrator | 2026-03-10 00:45:36 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-10 00:45:56.046229 | orchestrator | 2026-03-10 00:45:36 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-10 00:45:56.046241 | orchestrator | 2026-03-10 00:45:36 | INFO  | Variable preparation completed
2026-03-10 00:45:56.046261 | orchestrator | 2026-03-10 00:45:38 | INFO  | Starting inventory overwrite handling
2026-03-10 00:45:56.046281 | orchestrator | 2026-03-10 00:45:38 | INFO  | Handling group overwrites in 99-overwrite
2026-03-10 00:45:56.046300 | orchestrator | 2026-03-10 00:45:38 | INFO  | Removing group frr:children from 60-generic
2026-03-10 00:45:56.046397 | orchestrator | 2026-03-10 00:45:38 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-10 00:45:56.046411 | orchestrator | 2026-03-10 00:45:38 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-10 00:45:56.046422 | orchestrator | 2026-03-10 00:45:38 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-10 00:45:56.046433 | orchestrator | 2026-03-10 00:45:38 | INFO  | Handling group overwrites in 20-roles
2026-03-10 00:45:56.046444 | orchestrator | 2026-03-10 00:45:38 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-10 00:45:56.046455 | orchestrator | 2026-03-10 00:45:38 | INFO  | Removed 5 group(s) in total
2026-03-10 00:45:56.046466 | orchestrator | 2026-03-10 00:45:38 | INFO  | Inventory overwrite handling completed
2026-03-10 00:45:56.046477 | orchestrator | 2026-03-10 00:45:39 | INFO  | Starting merge of inventory files
2026-03-10 00:45:56.046487 | orchestrator | 2026-03-10 00:45:39 | INFO  | Inventory files merged successfully
2026-03-10 00:45:56.046498 | orchestrator | 2026-03-10 00:45:44 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-10 00:45:56.046509 | orchestrator | 2026-03-10 00:45:54 | INFO  | Successfully wrote ClusterShell configuration
2026-03-10 00:45:56.046523 | orchestrator | [master c088a49] 2026-03-10-00-45
2026-03-10 00:45:56.046536 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-10 00:45:58.489635 | orchestrator | 2026-03-10 00:45:58 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-10 00:45:58.554608 | orchestrator | 2026-03-10 00:45:58 | INFO  | Task 76b77f70-d2bb-4c16-96f4-c9d7bd2c1ac1 (ceph-create-lvm-devices) was prepared for execution.
2026-03-10 00:45:58.554717 | orchestrator | 2026-03-10 00:45:58 | INFO  | It takes a moment until task 76b77f70-d2bb-4c16-96f4-c9d7bd2c1ac1 (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-10 00:46:12.252984 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-10 00:46:12.253090 | orchestrator | 2.16.14
2026-03-10 00:46:12.253105 | orchestrator |
2026-03-10 00:46:12.253117 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-10 00:46:12.253129 | orchestrator |
2026-03-10 00:46:12.253141 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-10 00:46:12.253152 | orchestrator | Tuesday 10 March 2026 00:46:03 +0000 (0:00:00.337) 0:00:00.337 *********
2026-03-10 00:46:12.253164 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-10 00:46:12.253176 | orchestrator |
2026-03-10 00:46:12.253187 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-10 00:46:12.253198 | orchestrator | Tuesday 10 March 2026 00:46:03 +0000 (0:00:00.251) 0:00:00.589 *********
2026-03-10 00:46:12.253209 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:46:12.253220 | orchestrator |
2026-03-10 00:46:12.253231 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:46:12.253241 | orchestrator | Tuesday 10 March 2026 00:46:03 +0000 (0:00:00.268) 0:00:00.857 *********
2026-03-10 00:46:12.253252 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-10 00:46:12.253263 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-10 00:46:12.253273 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-10 00:46:12.253284 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-10 00:46:12.253295 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-10 00:46:12.253305 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-10 00:46:12.253348 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-10 00:46:12.253403 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-10 00:46:12.253423 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-10 00:46:12.253443 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-10 00:46:12.253455 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-10 00:46:12.253465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-10 00:46:12.253476 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-10 00:46:12.253487 | orchestrator |
2026-03-10 00:46:12.253497 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:46:12.253508 | orchestrator | Tuesday 10 March 2026 00:46:04 +0000 (0:00:00.585) 0:00:01.442 *********
2026-03-10 00:46:12.253520 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:12.253532 | orchestrator |
2026-03-10 00:46:12.253544 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:46:12.253556 | orchestrator | Tuesday 10 March 2026 00:46:04 +0000 (0:00:00.216) 0:00:01.659 *********
2026-03-10 00:46:12.253569 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:12.253581 | orchestrator |
2026-03-10 00:46:12.253593 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:46:12.253606 | orchestrator | Tuesday 10 March 2026 00:46:04 +0000 (0:00:00.246) 0:00:01.905 *********
2026-03-10 00:46:12.253618 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:12.253630 | orchestrator |
2026-03-10 00:46:12.253642 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:46:12.253654 | orchestrator | Tuesday 10 March 2026 00:46:05 +0000 (0:00:00.216) 0:00:02.122 *********
2026-03-10 00:46:12.253666 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:12.253678 | orchestrator |
2026-03-10 00:46:12.253690 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:46:12.253702 | orchestrator | Tuesday 10 March 2026 00:46:05 +0000 (0:00:00.184) 0:00:02.306 *********
2026-03-10 00:46:12.253714 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:12.253726 | orchestrator |
2026-03-10 00:46:12.253738 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:46:12.253768 | orchestrator | Tuesday 10 March 2026 00:46:05 +0000 (0:00:00.213) 0:00:02.519 *********
2026-03-10 00:46:12.253781 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:12.253793 | orchestrator |
2026-03-10 00:46:12.253805 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:46:12.253817 | orchestrator | Tuesday 10 March 2026 00:46:05 +0000 (0:00:00.238) 0:00:02.758 *********
2026-03-10 00:46:12.253830 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:12.253841 | orchestrator |
2026-03-10 00:46:12.253854 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:46:12.253866 | orchestrator | Tuesday 10 March 2026 00:46:05 +0000 (0:00:00.275) 0:00:03.033 *********
2026-03-10 00:46:12.253878 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:12.253888 | orchestrator |
2026-03-10 00:46:12.253899 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:46:12.253910 | orchestrator | Tuesday 10 March 2026 00:46:06 +0000 (0:00:00.213) 0:00:03.247 *********
2026-03-10 00:46:12.253920 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba)
2026-03-10 00:46:12.253932 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba)
2026-03-10 00:46:12.253942 | orchestrator |
2026-03-10 00:46:12.253953 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:46:12.253981 | orchestrator | Tuesday 10 March 2026 00:46:06 +0000 (0:00:00.482) 0:00:03.730 *********
2026-03-10 00:46:12.254001 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8f76f090-a1e0-42c3-8072-1f51d4df9a8c)
2026-03-10 00:46:12.254012 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8f76f090-a1e0-42c3-8072-1f51d4df9a8c)
2026-03-10 00:46:12.254085 | orchestrator |
2026-03-10 00:46:12.254096 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:46:12.254107 | orchestrator | Tuesday 10 March 2026 00:46:07 +0000 (0:00:00.731) 0:00:04.461 *********
2026-03-10 00:46:12.254118 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e4712c11-e6a0-4829-954c-3e21e73d266a)
2026-03-10 00:46:12.254128 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e4712c11-e6a0-4829-954c-3e21e73d266a)
2026-03-10 00:46:12.254139 | orchestrator |
2026-03-10 00:46:12.254150 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:46:12.254161 | orchestrator | Tuesday 10 March 2026 00:46:08 +0000 (0:00:00.987) 0:00:05.449 *********
2026-03-10 00:46:12.254171 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_5b638158-044f-4e2c-a80d-2256f7b00733)
2026-03-10 00:46:12.254182 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_5b638158-044f-4e2c-a80d-2256f7b00733)
2026-03-10 00:46:12.254192 | orchestrator |
2026-03-10 00:46:12.254203 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-10 00:46:12.254214 | orchestrator | Tuesday 10 March 2026 00:46:09 +0000 (0:00:01.278) 0:00:06.728 *********
2026-03-10 00:46:12.254225 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-10 00:46:12.254235 | orchestrator |
2026-03-10 00:46:12.254246 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:46:12.254256 | orchestrator | Tuesday 10 March 2026 00:46:10 +0000 (0:00:00.447) 0:00:07.175 *********
2026-03-10 00:46:12.254267 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-10 00:46:12.254278 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-10 00:46:12.254288 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-10 00:46:12.254299 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-10 00:46:12.254309 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-10 00:46:12.254372 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-10 00:46:12.254385 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-10 00:46:12.254396 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-10 00:46:12.254406 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-10 00:46:12.254417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-10 00:46:12.254427 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-10 00:46:12.254438 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-10 00:46:12.254448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-10 00:46:12.254459 | orchestrator |
2026-03-10 00:46:12.254470 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:46:12.254480 | orchestrator | Tuesday 10 March 2026 00:46:10 +0000 (0:00:00.462) 0:00:07.637 *********
2026-03-10 00:46:12.254491 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:12.254501 | orchestrator |
2026-03-10 00:46:12.254512 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:46:12.254522 | orchestrator | Tuesday 10 March 2026 00:46:10 +0000 (0:00:00.221) 0:00:07.859 *********
2026-03-10 00:46:12.254541 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:12.254552 | orchestrator |
2026-03-10 00:46:12.254563 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:46:12.254573 | orchestrator | Tuesday 10 March 2026 00:46:10 +0000 (0:00:00.212) 0:00:08.072 *********
2026-03-10 00:46:12.254584 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:12.254594 | orchestrator |
2026-03-10 00:46:12.254605 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:46:12.254616 | orchestrator | Tuesday 10 March 2026 00:46:11 +0000 (0:00:00.202) 0:00:08.274 *********
2026-03-10 00:46:12.254626 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:12.254637 | orchestrator |
2026-03-10 00:46:12.254647 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:46:12.254658 | orchestrator | Tuesday 10 March 2026 00:46:11 +0000 (0:00:00.240) 0:00:08.515 *********
2026-03-10 00:46:12.254669 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:12.254679 | orchestrator |
2026-03-10 00:46:12.254690 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:46:12.254700 | orchestrator | Tuesday 10 March 2026 00:46:11 +0000 (0:00:00.252) 0:00:08.768 *********
2026-03-10 00:46:12.254711 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:12.254722 | orchestrator |
2026-03-10 00:46:12.254732 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:46:12.254743 | orchestrator | Tuesday 10 March 2026 00:46:11 +0000 (0:00:00.309) 0:00:09.077 *********
2026-03-10 00:46:12.254753 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:12.254764 | orchestrator |
2026-03-10 00:46:12.254782 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:46:21.149901 | orchestrator | Tuesday 10 March 2026 00:46:12 +0000 (0:00:00.252) 0:00:09.330 *********
2026-03-10 00:46:21.150009 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:21.150088 | orchestrator |
2026-03-10 00:46:21.150101 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:46:21.150113 | orchestrator | Tuesday 10 March 2026 00:46:12 +0000 (0:00:00.241) 0:00:09.571 *********
2026-03-10 00:46:21.150124 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-10 00:46:21.150136 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-10 00:46:21.150148 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-10 00:46:21.150159 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-10 00:46:21.150170 | orchestrator |
2026-03-10 00:46:21.150181 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:46:21.150191 | orchestrator | Tuesday 10 March 2026 00:46:13 +0000 (0:00:01.317) 0:00:10.889 *********
2026-03-10 00:46:21.150202 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:21.150213 | orchestrator |
2026-03-10 00:46:21.150224 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:46:21.150235 | orchestrator | Tuesday 10 March 2026 00:46:14 +0000 (0:00:00.248) 0:00:11.137 *********
2026-03-10 00:46:21.150246 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:21.150257 | orchestrator |
2026-03-10 00:46:21.150268 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:46:21.150279 | orchestrator | Tuesday 10 March 2026 00:46:14 +0000 (0:00:00.216) 0:00:11.354 *********
2026-03-10 00:46:21.150290 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:21.150301 | orchestrator |
2026-03-10 00:46:21.150359 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-10 00:46:21.150370 | orchestrator | Tuesday 10 March 2026 00:46:14 +0000 (0:00:00.227) 0:00:11.582 *********
2026-03-10 00:46:21.150381 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:21.150392 | orchestrator |
2026-03-10 00:46:21.150404 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-10 00:46:21.150419 | orchestrator | Tuesday 10 March 2026 00:46:14 +0000 (0:00:00.202) 0:00:11.784 *********
2026-03-10 00:46:21.150431 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:21.150467 | orchestrator |
2026-03-10 00:46:21.150480 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-10 00:46:21.150492 | orchestrator | Tuesday 10 March 2026 00:46:14 +0000 (0:00:00.144) 0:00:11.930 *********
2026-03-10 00:46:21.150505 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'}})
2026-03-10 00:46:21.150520 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1e5abf04-63a5-5f41-bb2b-61caa92fdc91'}})
2026-03-10 00:46:21.150532 | orchestrator |
2026-03-10 00:46:21.150545 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-10 00:46:21.150558 | orchestrator | Tuesday 10 March 2026 00:46:15 +0000 (0:00:00.188) 0:00:12.118 *********
2026-03-10 00:46:21.150571 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})
2026-03-10 00:46:21.150585 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})
2026-03-10 00:46:21.150597 | orchestrator |
2026-03-10 00:46:21.150609 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-10 00:46:21.150621 | orchestrator | Tuesday 10 March 2026 00:46:17 +0000 (0:00:02.090) 0:00:14.209 *********
2026-03-10 00:46:21.150633 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})
2026-03-10 00:46:21.150647 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})
2026-03-10 00:46:21.150659 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:21.150671 | orchestrator |
2026-03-10 00:46:21.150685 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-10 00:46:21.150697 | orchestrator | Tuesday 10 March 2026 00:46:17 +0000 (0:00:00.159) 0:00:14.369 *********
2026-03-10 00:46:21.150710 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})
2026-03-10 00:46:21.150722 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})
2026-03-10 00:46:21.150734 | orchestrator |
2026-03-10 00:46:21.150763 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-10 00:46:21.150776 | orchestrator | Tuesday 10 March 2026 00:46:18 +0000 (0:00:01.529) 0:00:15.899 *********
2026-03-10 00:46:21.150788 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})
2026-03-10 00:46:21.150800 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})
2026-03-10 00:46:21.150811 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:21.150821 | orchestrator |
2026-03-10 00:46:21.150832 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-10 00:46:21.150843 | orchestrator | Tuesday 10 March 2026 00:46:18 +0000 (0:00:00.159) 0:00:16.058 *********
2026-03-10 00:46:21.150871 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:21.150883 | orchestrator |
2026-03-10 00:46:21.150893 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-10 00:46:21.150904 | orchestrator | Tuesday 10 March 2026 00:46:19 +0000 (0:00:00.193) 0:00:16.252 *********
2026-03-10 00:46:21.150915 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})
2026-03-10 00:46:21.150925 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})
2026-03-10 00:46:21.150944 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:21.150955 | orchestrator |
2026-03-10 00:46:21.150965 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-10 00:46:21.150976 | orchestrator | Tuesday 10 March 2026 00:46:19 +0000 (0:00:00.482) 0:00:16.734 *********
2026-03-10 00:46:21.150987 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:21.150997 | orchestrator |
2026-03-10 00:46:21.151008 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-10 00:46:21.151018 | orchestrator | Tuesday 10 March 2026 00:46:19 +0000 (0:00:00.154) 0:00:16.889 *********
2026-03-10 00:46:21.151029 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})
2026-03-10 00:46:21.151040 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})
2026-03-10 00:46:21.151050 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:21.151061 | orchestrator |
2026-03-10 00:46:21.151072 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-10 00:46:21.151082 | orchestrator | Tuesday 10 March 2026 00:46:19 +0000 (0:00:00.175) 0:00:17.064 *********
2026-03-10 00:46:21.151156 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:21.151170 | orchestrator |
2026-03-10 00:46:21.151180 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-10 00:46:21.151191 | orchestrator | Tuesday 10 March 2026 00:46:20 +0000 (0:00:00.198) 0:00:17.263 *********
2026-03-10 00:46:21.151202 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})
2026-03-10 00:46:21.151220 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})
2026-03-10 00:46:21.151232 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:21.151242 | orchestrator |
2026-03-10 00:46:21.151253 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-10 00:46:21.151264 | orchestrator | Tuesday 10 March 2026 00:46:20 +0000 (0:00:00.170) 0:00:17.433 *********
2026-03-10 00:46:21.151274 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:46:21.151285 | orchestrator |
2026-03-10 00:46:21.151296 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-10 00:46:21.151306 | orchestrator | Tuesday 10 March 2026 00:46:20 +0000 (0:00:00.163) 0:00:17.597 *********
2026-03-10 00:46:21.151343 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})
2026-03-10 00:46:21.151354 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})
2026-03-10 00:46:21.151365 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:21.151376 | orchestrator |
2026-03-10 00:46:21.151386 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-10 00:46:21.151397 | orchestrator | Tuesday 10 March 2026 00:46:20 +0000 (0:00:00.165) 0:00:17.763 *********
2026-03-10 00:46:21.151408 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})
2026-03-10 00:46:21.151419 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})
2026-03-10 00:46:21.151429 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:21.151440 | orchestrator |
2026-03-10 00:46:21.151466 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-10 00:46:21.151486 | orchestrator | Tuesday 10 March 2026 00:46:20 +0000 (0:00:00.165) 0:00:17.928 *********
2026-03-10 00:46:21.151496 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})
2026-03-10 00:46:21.151507 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})
2026-03-10 00:46:21.151518 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:21.151528 | orchestrator |
2026-03-10 00:46:21.151539 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-10 00:46:21.151550 | orchestrator | Tuesday 10 March 2026 00:46:21 +0000 (0:00:00.170) 0:00:18.098 *********
2026-03-10 00:46:21.151561 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:21.151571 | orchestrator |
2026-03-10 00:46:21.151582 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-10 00:46:21.151602 | orchestrator | Tuesday 10 March 2026 00:46:21 +0000 (0:00:00.131) 0:00:18.229 *********
2026-03-10 00:46:27.843190 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:27.844007 | orchestrator |
2026-03-10 00:46:27.844052 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-10 00:46:27.844074 | orchestrator | Tuesday 10 March 2026 00:46:21 +0000 (0:00:00.125) 0:00:18.355 *********
2026-03-10 00:46:27.844092 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:27.844103 | orchestrator |
2026-03-10 00:46:27.844113 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-10 00:46:27.844123 | orchestrator | Tuesday 10 March 2026 00:46:21 +0000 (0:00:00.140) 0:00:18.495 *********
2026-03-10 00:46:27.844132 | orchestrator | ok: [testbed-node-3] => {
2026-03-10 00:46:27.844142 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-10 00:46:27.844152 | orchestrator | }
2026-03-10 00:46:27.844162 | orchestrator |
2026-03-10 00:46:27.844171 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-10 00:46:27.844181 | orchestrator | Tuesday 10 March 2026 00:46:21 +0000 (0:00:00.372) 0:00:18.868 *********
2026-03-10 00:46:27.844190 | orchestrator | ok: [testbed-node-3] => {
2026-03-10 00:46:27.844200 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-10 00:46:27.844210 | orchestrator | }
2026-03-10 00:46:27.844220 | orchestrator |
2026-03-10 00:46:27.844230 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-10 00:46:27.844239 | orchestrator | Tuesday 10 March 2026 00:46:21 +0000 (0:00:00.156) 0:00:19.025 *********
2026-03-10 00:46:27.844249 | orchestrator | ok: [testbed-node-3] => {
2026-03-10 00:46:27.844258 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-10 00:46:27.844268 | orchestrator | }
2026-03-10 00:46:27.844278 | orchestrator |
2026-03-10 00:46:27.844287 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-10 00:46:27.844297 | orchestrator | Tuesday 10 March 2026 00:46:22 +0000 (0:00:00.165) 0:00:19.191 *********
2026-03-10 00:46:27.844345 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:46:27.844363 | orchestrator |
2026-03-10 00:46:27.844379 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-10 00:46:27.844396 | orchestrator | Tuesday 10 March 2026 00:46:22 +0000 (0:00:00.720) 0:00:19.911 *********
2026-03-10 00:46:27.844415 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:46:27.844431 | orchestrator |
2026-03-10 00:46:27.844447 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-10 00:46:27.844460 | orchestrator | Tuesday 10 March 2026 00:46:23 +0000 (0:00:00.573) 0:00:20.485 *********
2026-03-10 00:46:27.844469 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:46:27.844479 | orchestrator |
2026-03-10 00:46:27.844488 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-10 00:46:27.844497 | orchestrator | Tuesday 10 March 2026 00:46:24 +0000 (0:00:00.619) 0:00:21.105 *********
2026-03-10 00:46:27.844507 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:46:27.844516 | orchestrator |
2026-03-10 00:46:27.844547 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-10 00:46:27.844557 | orchestrator | Tuesday 10 March 2026 00:46:24 +0000 (0:00:00.186) 0:00:21.291 *********
2026-03-10 00:46:27.844567 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:27.844576 | orchestrator |
2026-03-10 00:46:27.844586 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-10 00:46:27.844596 | orchestrator | Tuesday 10 March 2026 00:46:24 +0000 (0:00:00.137) 0:00:21.429 *********
2026-03-10 00:46:27.844605 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:27.844614 | orchestrator |
2026-03-10 00:46:27.844624 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-10 00:46:27.844633 | orchestrator | Tuesday 10 March 2026 00:46:24 +0000 (0:00:00.145) 0:00:21.574 *********
2026-03-10 00:46:27.844643 | orchestrator | ok: [testbed-node-3] => {
2026-03-10 00:46:27.844658 | orchestrator |     "vgs_report": {
2026-03-10 00:46:27.844675 | orchestrator |         "vg": []
2026-03-10 00:46:27.844691 | orchestrator |     }
2026-03-10 00:46:27.844708 | orchestrator | }
2026-03-10 00:46:27.844726 | orchestrator |
2026-03-10 00:46:27.844741 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-10 00:46:27.844757 | orchestrator | Tuesday 10 March 2026 00:46:24 +0000 (0:00:00.148) 0:00:21.723 *********
2026-03-10 00:46:27.844767 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:27.844777 | orchestrator |
2026-03-10 00:46:27.844786 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-10 00:46:27.844795 | orchestrator | Tuesday 10 March 2026 00:46:24 +0000 (0:00:00.157) 0:00:21.881 *********
2026-03-10 00:46:27.844805 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:27.844814 | orchestrator |
2026-03-10 00:46:27.844831 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-10 00:46:27.844847 | orchestrator | Tuesday 10 March 2026 00:46:24 +0000 (0:00:00.146) 0:00:22.028 *********
2026-03-10 00:46:27.844862 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:27.844878 | orchestrator |
2026-03-10 00:46:27.844892 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-10 00:46:27.844907 | orchestrator | Tuesday 10 March 2026 00:46:25 +0000 (0:00:00.383) 0:00:22.411 *********
2026-03-10 00:46:27.844921 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:27.844935 | orchestrator |
2026-03-10 00:46:27.844951 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-10 00:46:27.844966 | orchestrator | Tuesday 10 March 2026 00:46:25 +0000 (0:00:00.184) 0:00:22.596 *********
2026-03-10 00:46:27.844981 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:27.844997 | orchestrator |
2026-03-10 00:46:27.845013 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-10 00:46:27.845030 | orchestrator | Tuesday 10 March 2026 00:46:25 +0000 (0:00:00.139) 0:00:22.736 *********
2026-03-10 00:46:27.845046 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:27.845061 | orchestrator |
2026-03-10 00:46:27.845077 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-10 00:46:27.845094 | orchestrator | Tuesday 10 March 2026 00:46:25 +0000 (0:00:00.127) 0:00:22.863 *********
2026-03-10 00:46:27.845111 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:27.845126 | orchestrator |
2026-03-10 00:46:27.845141 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-10 00:46:27.845158 | orchestrator | Tuesday 10 March 2026 00:46:25 +0000 (0:00:00.126) 0:00:22.990 *********
2026-03-10 00:46:27.845198 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:27.845216 | orchestrator |
2026-03-10 00:46:27.845233 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-10 00:46:27.845251 | orchestrator | Tuesday 10 March 2026 00:46:26 +0000 (0:00:00.125) 0:00:23.115 *********
2026-03-10 00:46:27.845268 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:27.845284 | orchestrator |
2026-03-10 00:46:27.845323 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-10 00:46:27.845359 | orchestrator | Tuesday 10 March 2026 00:46:26 +0000 (0:00:00.120) 0:00:23.236 *********
2026-03-10 00:46:27.845437 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:27.845449 | orchestrator |
2026-03-10 00:46:27.845459 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-10 00:46:27.845469 | orchestrator | Tuesday 10 March 2026 00:46:26 +0000 (0:00:00.133) 0:00:23.369 *********
2026-03-10 00:46:27.845478 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:27.845489 | orchestrator |
2026-03-10 00:46:27.845526 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-10 00:46:27.845547 | orchestrator | Tuesday 10 March 2026 00:46:26 +0000 (0:00:00.141) 0:00:23.510 *********
2026-03-10 00:46:27.845564 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:27.845579 | orchestrator |
2026-03-10 00:46:27.845589 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-10 00:46:27.845600 | orchestrator | Tuesday 10 March 2026 00:46:26 +0000 (0:00:00.129) 0:00:23.640 *********
2026-03-10 00:46:27.845612 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:27.845658 | orchestrator |
2026-03-10 00:46:27.845671 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-10 00:46:27.845682 | orchestrator | Tuesday 10 March 2026 00:46:26 +0000 (0:00:00.126) 0:00:23.767 *********
2026-03-10 00:46:27.845692 | orchestrator | skipping: [testbed-node-3]
2026-03-10 00:46:27.845704 | orchestrator |
2026-03-10 00:46:27.845715 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-10 00:46:27.845726 | orchestrator | Tuesday 10 March 2026 00:46:26 +0000 (0:00:00.129) 0:00:23.897 *********
2026-03-10 00:46:27.845738 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})
2026-03-10 00:46:27.845751 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 
'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})  2026-03-10 00:46:27.845760 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:46:27.845770 | orchestrator | 2026-03-10 00:46:27.845779 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-10 00:46:27.845794 | orchestrator | Tuesday 10 March 2026 00:46:27 +0000 (0:00:00.305) 0:00:24.203 ********* 2026-03-10 00:46:27.845804 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})  2026-03-10 00:46:27.845814 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})  2026-03-10 00:46:27.845823 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:46:27.845833 | orchestrator | 2026-03-10 00:46:27.845842 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-10 00:46:27.845852 | orchestrator | Tuesday 10 March 2026 00:46:27 +0000 (0:00:00.158) 0:00:24.361 ********* 2026-03-10 00:46:27.845861 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})  2026-03-10 00:46:27.845871 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})  2026-03-10 00:46:27.845880 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:46:27.845890 | orchestrator | 2026-03-10 00:46:27.845899 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-10 00:46:27.845909 | orchestrator | Tuesday 10 March 2026 00:46:27 +0000 (0:00:00.167) 0:00:24.529 ********* 2026-03-10 00:46:27.845918 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})  2026-03-10 00:46:27.845928 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})  2026-03-10 00:46:27.845955 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:46:27.845973 | orchestrator | 2026-03-10 00:46:27.845992 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-10 00:46:27.846009 | orchestrator | Tuesday 10 March 2026 00:46:27 +0000 (0:00:00.141) 0:00:24.671 ********* 2026-03-10 00:46:27.846073 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})  2026-03-10 00:46:27.846084 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})  2026-03-10 00:46:27.846093 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:46:27.846103 | orchestrator | 2026-03-10 00:46:27.846112 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-10 00:46:27.846122 | orchestrator | Tuesday 10 March 2026 00:46:27 +0000 (0:00:00.180) 0:00:24.851 ********* 2026-03-10 00:46:27.846173 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})  2026-03-10 00:46:33.213671 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})  2026-03-10 00:46:33.213735 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:46:33.213745 | orchestrator | 2026-03-10 00:46:33.213753 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-10 00:46:33.213761 | orchestrator | Tuesday 10 March 2026 00:46:27 +0000 (0:00:00.169) 0:00:25.020 ********* 2026-03-10 00:46:33.213768 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})  2026-03-10 00:46:33.213776 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})  2026-03-10 00:46:33.213783 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:46:33.213789 | orchestrator | 2026-03-10 00:46:33.213797 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-10 00:46:33.213804 | orchestrator | Tuesday 10 March 2026 00:46:28 +0000 (0:00:00.196) 0:00:25.217 ********* 2026-03-10 00:46:33.213811 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})  2026-03-10 00:46:33.213818 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})  2026-03-10 00:46:33.213825 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:46:33.213832 | orchestrator | 2026-03-10 00:46:33.213839 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-10 00:46:33.213846 | orchestrator | Tuesday 10 March 2026 00:46:28 +0000 (0:00:00.169) 0:00:25.386 ********* 2026-03-10 00:46:33.213853 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:46:33.213860 | orchestrator | 2026-03-10 00:46:33.213868 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-10 00:46:33.213875 | orchestrator | Tuesday 10 March 2026 00:46:28 +0000 
(0:00:00.527) 0:00:25.914 ********* 2026-03-10 00:46:33.213881 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:46:33.213888 | orchestrator | 2026-03-10 00:46:33.213895 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-10 00:46:33.213912 | orchestrator | Tuesday 10 March 2026 00:46:29 +0000 (0:00:00.544) 0:00:26.459 ********* 2026-03-10 00:46:33.213919 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:46:33.213926 | orchestrator | 2026-03-10 00:46:33.213933 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-10 00:46:33.213940 | orchestrator | Tuesday 10 March 2026 00:46:29 +0000 (0:00:00.136) 0:00:26.595 ********* 2026-03-10 00:46:33.213960 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'vg_name': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'}) 2026-03-10 00:46:33.213968 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'vg_name': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'}) 2026-03-10 00:46:33.213975 | orchestrator | 2026-03-10 00:46:33.213982 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-10 00:46:33.213989 | orchestrator | Tuesday 10 March 2026 00:46:29 +0000 (0:00:00.159) 0:00:26.755 ********* 2026-03-10 00:46:33.213995 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})  2026-03-10 00:46:33.214002 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})  2026-03-10 00:46:33.214010 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:46:33.214046 | orchestrator | 2026-03-10 00:46:33.214053 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-10 00:46:33.214060 | orchestrator | Tuesday 10 March 2026 00:46:29 +0000 (0:00:00.303) 0:00:27.059 ********* 2026-03-10 00:46:33.214067 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})  2026-03-10 00:46:33.214074 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})  2026-03-10 00:46:33.214081 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:46:33.214088 | orchestrator | 2026-03-10 00:46:33.214095 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-10 00:46:33.214102 | orchestrator | Tuesday 10 March 2026 00:46:30 +0000 (0:00:00.169) 0:00:27.228 ********* 2026-03-10 00:46:33.214109 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})  2026-03-10 00:46:33.214116 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})  2026-03-10 00:46:33.214123 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:46:33.214130 | orchestrator | 2026-03-10 00:46:33.214136 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-10 00:46:33.214143 | orchestrator | Tuesday 10 March 2026 00:46:30 +0000 (0:00:00.155) 0:00:27.384 ********* 2026-03-10 00:46:33.214161 | orchestrator | ok: [testbed-node-3] => { 2026-03-10 00:46:33.214168 | orchestrator |  "lvm_report": { 2026-03-10 00:46:33.214175 | orchestrator |  "lv": [ 2026-03-10 00:46:33.214182 | orchestrator |  { 2026-03-10 00:46:33.214189 | orchestrator |  "lv_name": 
"osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91", 2026-03-10 00:46:33.214196 | orchestrator |  "vg_name": "ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91" 2026-03-10 00:46:33.214203 | orchestrator |  }, 2026-03-10 00:46:33.214210 | orchestrator |  { 2026-03-10 00:46:33.214217 | orchestrator |  "lv_name": "osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df", 2026-03-10 00:46:33.214224 | orchestrator |  "vg_name": "ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df" 2026-03-10 00:46:33.214231 | orchestrator |  } 2026-03-10 00:46:33.214238 | orchestrator |  ], 2026-03-10 00:46:33.214245 | orchestrator |  "pv": [ 2026-03-10 00:46:33.214252 | orchestrator |  { 2026-03-10 00:46:33.214259 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-10 00:46:33.214266 | orchestrator |  "vg_name": "ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df" 2026-03-10 00:46:33.214273 | orchestrator |  }, 2026-03-10 00:46:33.214280 | orchestrator |  { 2026-03-10 00:46:33.214291 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-10 00:46:33.214340 | orchestrator |  "vg_name": "ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91" 2026-03-10 00:46:33.214348 | orchestrator |  } 2026-03-10 00:46:33.214355 | orchestrator |  ] 2026-03-10 00:46:33.214362 | orchestrator |  } 2026-03-10 00:46:33.214369 | orchestrator | } 2026-03-10 00:46:33.214376 | orchestrator | 2026-03-10 00:46:33.214383 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-10 00:46:33.214390 | orchestrator | 2026-03-10 00:46:33.214396 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-10 00:46:33.214402 | orchestrator | Tuesday 10 March 2026 00:46:30 +0000 (0:00:00.295) 0:00:27.680 ********* 2026-03-10 00:46:33.214408 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-10 00:46:33.214413 | orchestrator | 2026-03-10 00:46:33.214419 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-10 
00:46:33.214426 | orchestrator | Tuesday 10 March 2026 00:46:30 +0000 (0:00:00.331) 0:00:28.011 ********* 2026-03-10 00:46:33.214432 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:46:33.214438 | orchestrator | 2026-03-10 00:46:33.214444 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:46:33.214451 | orchestrator | Tuesday 10 March 2026 00:46:31 +0000 (0:00:00.224) 0:00:28.235 ********* 2026-03-10 00:46:33.214458 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-10 00:46:33.214465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-10 00:46:33.214472 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-10 00:46:33.214479 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-10 00:46:33.214485 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-10 00:46:33.214492 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-10 00:46:33.214499 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-10 00:46:33.214506 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-10 00:46:33.214512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-10 00:46:33.214519 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-10 00:46:33.214526 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-10 00:46:33.214533 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-10 00:46:33.214540 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-10 00:46:33.214547 | orchestrator | 2026-03-10 00:46:33.214554 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:46:33.214560 | orchestrator | Tuesday 10 March 2026 00:46:31 +0000 (0:00:00.416) 0:00:28.652 ********* 2026-03-10 00:46:33.214567 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:33.214574 | orchestrator | 2026-03-10 00:46:33.214581 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:46:33.214594 | orchestrator | Tuesday 10 March 2026 00:46:31 +0000 (0:00:00.260) 0:00:28.912 ********* 2026-03-10 00:46:33.214600 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:33.214607 | orchestrator | 2026-03-10 00:46:33.214614 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:46:33.214621 | orchestrator | Tuesday 10 March 2026 00:46:32 +0000 (0:00:00.198) 0:00:29.111 ********* 2026-03-10 00:46:33.214627 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:33.214634 | orchestrator | 2026-03-10 00:46:33.214641 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:46:33.214653 | orchestrator | Tuesday 10 March 2026 00:46:32 +0000 (0:00:00.573) 0:00:29.685 ********* 2026-03-10 00:46:33.214660 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:33.214667 | orchestrator | 2026-03-10 00:46:33.214674 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:46:33.214680 | orchestrator | Tuesday 10 March 2026 00:46:32 +0000 (0:00:00.190) 0:00:29.875 ********* 2026-03-10 00:46:33.214687 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:33.214694 | orchestrator | 2026-03-10 00:46:33.214701 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-10 00:46:33.214708 | orchestrator | Tuesday 10 March 2026 00:46:32 +0000 (0:00:00.209) 0:00:30.084 ********* 2026-03-10 00:46:33.214715 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:33.214720 | orchestrator | 2026-03-10 00:46:33.214733 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:46:44.003052 | orchestrator | Tuesday 10 March 2026 00:46:33 +0000 (0:00:00.210) 0:00:30.295 ********* 2026-03-10 00:46:44.003124 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:44.003135 | orchestrator | 2026-03-10 00:46:44.003143 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:46:44.003147 | orchestrator | Tuesday 10 March 2026 00:46:33 +0000 (0:00:00.198) 0:00:30.493 ********* 2026-03-10 00:46:44.003151 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:44.003154 | orchestrator | 2026-03-10 00:46:44.003158 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:46:44.003162 | orchestrator | Tuesday 10 March 2026 00:46:33 +0000 (0:00:00.195) 0:00:30.689 ********* 2026-03-10 00:46:44.003212 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d) 2026-03-10 00:46:44.003218 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d) 2026-03-10 00:46:44.003222 | orchestrator | 2026-03-10 00:46:44.003226 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:46:44.003229 | orchestrator | Tuesday 10 March 2026 00:46:34 +0000 (0:00:00.404) 0:00:31.094 ********* 2026-03-10 00:46:44.003233 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b94fdc5f-2b9b-46a8-a60f-74e41f269a0d) 2026-03-10 00:46:44.003237 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b94fdc5f-2b9b-46a8-a60f-74e41f269a0d) 2026-03-10 00:46:44.003241 | orchestrator | 2026-03-10 00:46:44.003245 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:46:44.003248 | orchestrator | Tuesday 10 March 2026 00:46:34 +0000 (0:00:00.393) 0:00:31.487 ********* 2026-03-10 00:46:44.003252 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_32f512e5-1c04-4680-91d7-4268581c2350) 2026-03-10 00:46:44.003256 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_32f512e5-1c04-4680-91d7-4268581c2350) 2026-03-10 00:46:44.003260 | orchestrator | 2026-03-10 00:46:44.003263 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:46:44.003267 | orchestrator | Tuesday 10 March 2026 00:46:34 +0000 (0:00:00.437) 0:00:31.925 ********* 2026-03-10 00:46:44.003279 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_21ab9d1e-083b-4748-865b-4e7341aec385) 2026-03-10 00:46:44.003282 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_21ab9d1e-083b-4748-865b-4e7341aec385) 2026-03-10 00:46:44.003311 | orchestrator | 2026-03-10 00:46:44.003315 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:46:44.003319 | orchestrator | Tuesday 10 March 2026 00:46:35 +0000 (0:00:00.564) 0:00:32.489 ********* 2026-03-10 00:46:44.003323 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-10 00:46:44.003327 | orchestrator | 2026-03-10 00:46:44.003330 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:46:44.003334 | orchestrator | Tuesday 10 March 2026 00:46:35 +0000 (0:00:00.539) 0:00:33.029 ********* 2026-03-10 00:46:44.003391 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-10 00:46:44.003403 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-10 00:46:44.003407 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-10 00:46:44.003410 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-10 00:46:44.003414 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-10 00:46:44.003418 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-10 00:46:44.003422 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-10 00:46:44.003425 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-10 00:46:44.003429 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-10 00:46:44.003433 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-10 00:46:44.003436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-10 00:46:44.003442 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-10 00:46:44.003521 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-10 00:46:44.003562 | orchestrator | 2026-03-10 00:46:44.003568 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:46:44.003572 | orchestrator | Tuesday 10 March 2026 00:46:36 +0000 (0:00:00.774) 0:00:33.804 ********* 2026-03-10 00:46:44.003576 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:44.003579 | orchestrator | 2026-03-10 
00:46:44.003583 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:46:44.003587 | orchestrator | Tuesday 10 March 2026 00:46:36 +0000 (0:00:00.205) 0:00:34.010 ********* 2026-03-10 00:46:44.003590 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:44.003594 | orchestrator | 2026-03-10 00:46:44.003598 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:46:44.003602 | orchestrator | Tuesday 10 March 2026 00:46:37 +0000 (0:00:00.213) 0:00:34.224 ********* 2026-03-10 00:46:44.003605 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:44.003609 | orchestrator | 2026-03-10 00:46:44.003622 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:46:44.003627 | orchestrator | Tuesday 10 March 2026 00:46:37 +0000 (0:00:00.197) 0:00:34.421 ********* 2026-03-10 00:46:44.003637 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:44.003646 | orchestrator | 2026-03-10 00:46:44.003650 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:46:44.003654 | orchestrator | Tuesday 10 March 2026 00:46:37 +0000 (0:00:00.188) 0:00:34.610 ********* 2026-03-10 00:46:44.003663 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:44.003668 | orchestrator | 2026-03-10 00:46:44.003673 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:46:44.003677 | orchestrator | Tuesday 10 March 2026 00:46:37 +0000 (0:00:00.265) 0:00:34.876 ********* 2026-03-10 00:46:44.003682 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:44.003686 | orchestrator | 2026-03-10 00:46:44.003690 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:46:44.003695 | orchestrator | Tuesday 10 March 2026 00:46:38 +0000 (0:00:00.225) 
0:00:35.101 ********* 2026-03-10 00:46:44.003698 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:44.003702 | orchestrator | 2026-03-10 00:46:44.003706 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:46:44.003711 | orchestrator | Tuesday 10 March 2026 00:46:38 +0000 (0:00:00.224) 0:00:35.326 ********* 2026-03-10 00:46:44.003723 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:44.003728 | orchestrator | 2026-03-10 00:46:44.003734 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:46:44.003739 | orchestrator | Tuesday 10 March 2026 00:46:38 +0000 (0:00:00.231) 0:00:35.557 ********* 2026-03-10 00:46:44.003761 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-10 00:46:44.003767 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-10 00:46:44.003773 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-10 00:46:44.003785 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-10 00:46:44.003791 | orchestrator | 2026-03-10 00:46:44.003797 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:46:44.003803 | orchestrator | Tuesday 10 March 2026 00:46:39 +0000 (0:00:00.887) 0:00:36.444 ********* 2026-03-10 00:46:44.003809 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:44.003816 | orchestrator | 2026-03-10 00:46:44.003822 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:46:44.003828 | orchestrator | Tuesday 10 March 2026 00:46:39 +0000 (0:00:00.192) 0:00:36.636 ********* 2026-03-10 00:46:44.003844 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:44.003849 | orchestrator | 2026-03-10 00:46:44.003852 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:46:44.003856 | orchestrator | Tuesday 10 
March 2026 00:46:40 +0000 (0:00:00.506) 0:00:37.143 ********* 2026-03-10 00:46:44.003886 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:44.003894 | orchestrator | 2026-03-10 00:46:44.003900 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:46:44.003907 | orchestrator | Tuesday 10 March 2026 00:46:40 +0000 (0:00:00.179) 0:00:37.322 ********* 2026-03-10 00:46:44.003914 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:44.003920 | orchestrator | 2026-03-10 00:46:44.003951 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-10 00:46:44.003958 | orchestrator | Tuesday 10 March 2026 00:46:40 +0000 (0:00:00.193) 0:00:37.516 ********* 2026-03-10 00:46:44.003964 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:44.003970 | orchestrator | 2026-03-10 00:46:44.003976 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-10 00:46:44.003983 | orchestrator | Tuesday 10 March 2026 00:46:40 +0000 (0:00:00.117) 0:00:37.633 ********* 2026-03-10 00:46:44.003989 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'}}) 2026-03-10 00:46:44.003996 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '5a55caf6-84ae-542a-a466-02d3e6c6095e'}}) 2026-03-10 00:46:44.004002 | orchestrator | 2026-03-10 00:46:44.004008 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-10 00:46:44.004014 | orchestrator | Tuesday 10 March 2026 00:46:40 +0000 (0:00:00.172) 0:00:37.806 ********* 2026-03-10 00:46:44.004021 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'}) 2026-03-10 00:46:44.004028 | orchestrator | changed: [testbed-node-4] 
=> (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'}) 2026-03-10 00:46:44.004039 | orchestrator | 2026-03-10 00:46:44.004043 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-10 00:46:44.004047 | orchestrator | Tuesday 10 March 2026 00:46:42 +0000 (0:00:01.881) 0:00:39.687 ********* 2026-03-10 00:46:44.004051 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 00:46:44.004063 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:46:44.004075 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:44.004081 | orchestrator | 2026-03-10 00:46:44.004088 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-10 00:46:44.004094 | orchestrator | Tuesday 10 March 2026 00:46:42 +0000 (0:00:00.156) 0:00:39.844 ********* 2026-03-10 00:46:44.004136 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'}) 2026-03-10 00:46:44.004149 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'}) 2026-03-10 00:46:50.357360 | orchestrator | 2026-03-10 00:46:50.357473 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-10 00:46:50.357492 | orchestrator | Tuesday 10 March 2026 00:46:44 +0000 (0:00:01.329) 0:00:41.173 ********* 2026-03-10 00:46:50.357504 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 
'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 00:46:50.357517 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:46:50.357529 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.357540 | orchestrator | 2026-03-10 00:46:50.357552 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-10 00:46:50.357563 | orchestrator | Tuesday 10 March 2026 00:46:44 +0000 (0:00:00.178) 0:00:41.352 ********* 2026-03-10 00:46:50.357573 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.357584 | orchestrator | 2026-03-10 00:46:50.357595 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-10 00:46:50.357605 | orchestrator | Tuesday 10 March 2026 00:46:44 +0000 (0:00:00.152) 0:00:41.504 ********* 2026-03-10 00:46:50.357616 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 00:46:50.357627 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:46:50.357638 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.357649 | orchestrator | 2026-03-10 00:46:50.357660 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-10 00:46:50.357671 | orchestrator | Tuesday 10 March 2026 00:46:44 +0000 (0:00:00.186) 0:00:41.691 ********* 2026-03-10 00:46:50.357681 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.357692 | orchestrator | 2026-03-10 00:46:50.357703 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-10 00:46:50.357714 | orchestrator | 
Tuesday 10 March 2026 00:46:44 +0000 (0:00:00.139) 0:00:41.831 ********* 2026-03-10 00:46:50.357725 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 00:46:50.357736 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:46:50.357747 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.357757 | orchestrator | 2026-03-10 00:46:50.357768 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-10 00:46:50.357779 | orchestrator | Tuesday 10 March 2026 00:46:45 +0000 (0:00:00.417) 0:00:42.249 ********* 2026-03-10 00:46:50.357790 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.357816 | orchestrator | 2026-03-10 00:46:50.357829 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-10 00:46:50.357841 | orchestrator | Tuesday 10 March 2026 00:46:45 +0000 (0:00:00.159) 0:00:42.408 ********* 2026-03-10 00:46:50.357864 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 00:46:50.357900 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:46:50.357913 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.357926 | orchestrator | 2026-03-10 00:46:50.357939 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-10 00:46:50.357967 | orchestrator | Tuesday 10 March 2026 00:46:45 +0000 (0:00:00.155) 0:00:42.564 ********* 2026-03-10 00:46:50.357979 | orchestrator | ok: [testbed-node-4] 
2026-03-10 00:46:50.357990 | orchestrator | 2026-03-10 00:46:50.358001 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-10 00:46:50.358011 | orchestrator | Tuesday 10 March 2026 00:46:45 +0000 (0:00:00.145) 0:00:42.710 ********* 2026-03-10 00:46:50.358085 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 00:46:50.358096 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:46:50.358109 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.358128 | orchestrator | 2026-03-10 00:46:50.358146 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-10 00:46:50.358175 | orchestrator | Tuesday 10 March 2026 00:46:45 +0000 (0:00:00.150) 0:00:42.860 ********* 2026-03-10 00:46:50.358198 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 00:46:50.358214 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:46:50.358232 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.358249 | orchestrator | 2026-03-10 00:46:50.358267 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-10 00:46:50.358336 | orchestrator | Tuesday 10 March 2026 00:46:45 +0000 (0:00:00.149) 0:00:43.010 ********* 2026-03-10 00:46:50.358355 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 
00:46:50.358372 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:46:50.358389 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.358405 | orchestrator | 2026-03-10 00:46:50.358421 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-10 00:46:50.358437 | orchestrator | Tuesday 10 March 2026 00:46:46 +0000 (0:00:00.179) 0:00:43.190 ********* 2026-03-10 00:46:50.358453 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.358470 | orchestrator | 2026-03-10 00:46:50.358488 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-10 00:46:50.358505 | orchestrator | Tuesday 10 March 2026 00:46:46 +0000 (0:00:00.149) 0:00:43.340 ********* 2026-03-10 00:46:50.358521 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.358538 | orchestrator | 2026-03-10 00:46:50.358555 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-10 00:46:50.358573 | orchestrator | Tuesday 10 March 2026 00:46:46 +0000 (0:00:00.143) 0:00:43.484 ********* 2026-03-10 00:46:50.358590 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.358609 | orchestrator | 2026-03-10 00:46:50.358629 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-10 00:46:50.358650 | orchestrator | Tuesday 10 March 2026 00:46:46 +0000 (0:00:00.158) 0:00:43.642 ********* 2026-03-10 00:46:50.358669 | orchestrator | ok: [testbed-node-4] => { 2026-03-10 00:46:50.358686 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-10 00:46:50.358723 | orchestrator | } 2026-03-10 00:46:50.358740 | orchestrator | 2026-03-10 00:46:50.358759 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-10 
00:46:50.358777 | orchestrator | Tuesday 10 March 2026 00:46:46 +0000 (0:00:00.187) 0:00:43.829 ********* 2026-03-10 00:46:50.358795 | orchestrator | ok: [testbed-node-4] => { 2026-03-10 00:46:50.358814 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-10 00:46:50.358832 | orchestrator | } 2026-03-10 00:46:50.358850 | orchestrator | 2026-03-10 00:46:50.358874 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-10 00:46:50.358885 | orchestrator | Tuesday 10 March 2026 00:46:46 +0000 (0:00:00.166) 0:00:43.996 ********* 2026-03-10 00:46:50.358896 | orchestrator | ok: [testbed-node-4] => { 2026-03-10 00:46:50.358906 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-10 00:46:50.358923 | orchestrator | } 2026-03-10 00:46:50.358942 | orchestrator | 2026-03-10 00:46:50.358958 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-10 00:46:50.358975 | orchestrator | Tuesday 10 March 2026 00:46:47 +0000 (0:00:00.580) 0:00:44.576 ********* 2026-03-10 00:46:50.358993 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:46:50.359010 | orchestrator | 2026-03-10 00:46:50.359027 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-10 00:46:50.359045 | orchestrator | Tuesday 10 March 2026 00:46:48 +0000 (0:00:00.574) 0:00:45.151 ********* 2026-03-10 00:46:50.359063 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:46:50.359080 | orchestrator | 2026-03-10 00:46:50.359099 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-10 00:46:50.359117 | orchestrator | Tuesday 10 March 2026 00:46:48 +0000 (0:00:00.603) 0:00:45.754 ********* 2026-03-10 00:46:50.359137 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:46:50.359156 | orchestrator | 2026-03-10 00:46:50.359174 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-10 00:46:50.359190 | orchestrator | Tuesday 10 March 2026 00:46:49 +0000 (0:00:00.512) 0:00:46.266 ********* 2026-03-10 00:46:50.359202 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:46:50.359212 | orchestrator | 2026-03-10 00:46:50.359223 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-10 00:46:50.359233 | orchestrator | Tuesday 10 March 2026 00:46:49 +0000 (0:00:00.157) 0:00:46.424 ********* 2026-03-10 00:46:50.359244 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.359254 | orchestrator | 2026-03-10 00:46:50.359265 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-10 00:46:50.359276 | orchestrator | Tuesday 10 March 2026 00:46:49 +0000 (0:00:00.126) 0:00:46.550 ********* 2026-03-10 00:46:50.359352 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.359363 | orchestrator | 2026-03-10 00:46:50.359373 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-10 00:46:50.359384 | orchestrator | Tuesday 10 March 2026 00:46:49 +0000 (0:00:00.122) 0:00:46.672 ********* 2026-03-10 00:46:50.359395 | orchestrator | ok: [testbed-node-4] => { 2026-03-10 00:46:50.359405 | orchestrator |  "vgs_report": { 2026-03-10 00:46:50.359416 | orchestrator |  "vg": [] 2026-03-10 00:46:50.359427 | orchestrator |  } 2026-03-10 00:46:50.359438 | orchestrator | } 2026-03-10 00:46:50.359448 | orchestrator | 2026-03-10 00:46:50.359459 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-10 00:46:50.359470 | orchestrator | Tuesday 10 March 2026 00:46:49 +0000 (0:00:00.161) 0:00:46.834 ********* 2026-03-10 00:46:50.359481 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.359491 | orchestrator | 2026-03-10 00:46:50.359502 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-10 00:46:50.359513 | orchestrator | Tuesday 10 March 2026 00:46:49 +0000 (0:00:00.163) 0:00:46.997 ********* 2026-03-10 00:46:50.359523 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.359534 | orchestrator | 2026-03-10 00:46:50.359545 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-10 00:46:50.359567 | orchestrator | Tuesday 10 March 2026 00:46:50 +0000 (0:00:00.154) 0:00:47.152 ********* 2026-03-10 00:46:50.359578 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.359589 | orchestrator | 2026-03-10 00:46:50.359599 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-10 00:46:50.359610 | orchestrator | Tuesday 10 March 2026 00:46:50 +0000 (0:00:00.139) 0:00:47.292 ********* 2026-03-10 00:46:50.359621 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:50.359631 | orchestrator | 2026-03-10 00:46:50.359656 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-10 00:46:55.247031 | orchestrator | Tuesday 10 March 2026 00:46:50 +0000 (0:00:00.145) 0:00:47.437 ********* 2026-03-10 00:46:55.247126 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:55.247138 | orchestrator | 2026-03-10 00:46:55.247145 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-10 00:46:55.247151 | orchestrator | Tuesday 10 March 2026 00:46:50 +0000 (0:00:00.376) 0:00:47.814 ********* 2026-03-10 00:46:55.247157 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:55.247163 | orchestrator | 2026-03-10 00:46:55.247169 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-10 00:46:55.247175 | orchestrator | Tuesday 10 March 2026 00:46:50 +0000 (0:00:00.158) 0:00:47.972 ********* 2026-03-10 00:46:55.247181 | orchestrator | skipping: [testbed-node-4] 
2026-03-10 00:46:55.247187 | orchestrator | 2026-03-10 00:46:55.247194 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-10 00:46:55.247200 | orchestrator | Tuesday 10 March 2026 00:46:51 +0000 (0:00:00.176) 0:00:48.149 ********* 2026-03-10 00:46:55.247207 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:55.247213 | orchestrator | 2026-03-10 00:46:55.247221 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-10 00:46:55.247225 | orchestrator | Tuesday 10 March 2026 00:46:51 +0000 (0:00:00.151) 0:00:48.300 ********* 2026-03-10 00:46:55.247229 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:55.247233 | orchestrator | 2026-03-10 00:46:55.247236 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-10 00:46:55.247241 | orchestrator | Tuesday 10 March 2026 00:46:51 +0000 (0:00:00.149) 0:00:48.450 ********* 2026-03-10 00:46:55.247245 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:55.247248 | orchestrator | 2026-03-10 00:46:55.247252 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-10 00:46:55.247256 | orchestrator | Tuesday 10 March 2026 00:46:51 +0000 (0:00:00.163) 0:00:48.613 ********* 2026-03-10 00:46:55.247260 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:55.247263 | orchestrator | 2026-03-10 00:46:55.247267 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-10 00:46:55.247271 | orchestrator | Tuesday 10 March 2026 00:46:51 +0000 (0:00:00.154) 0:00:48.768 ********* 2026-03-10 00:46:55.247313 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:55.247317 | orchestrator | 2026-03-10 00:46:55.247321 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-10 00:46:55.247324 | orchestrator | 
Tuesday 10 March 2026 00:46:51 +0000 (0:00:00.157) 0:00:48.925 ********* 2026-03-10 00:46:55.247328 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:55.247332 | orchestrator | 2026-03-10 00:46:55.247336 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-10 00:46:55.247339 | orchestrator | Tuesday 10 March 2026 00:46:52 +0000 (0:00:00.204) 0:00:49.129 ********* 2026-03-10 00:46:55.247343 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:55.247347 | orchestrator | 2026-03-10 00:46:55.247350 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-10 00:46:55.247354 | orchestrator | Tuesday 10 March 2026 00:46:52 +0000 (0:00:00.150) 0:00:49.280 ********* 2026-03-10 00:46:55.247359 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 00:46:55.247383 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:46:55.247387 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:55.247390 | orchestrator | 2026-03-10 00:46:55.247394 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-10 00:46:55.247398 | orchestrator | Tuesday 10 March 2026 00:46:52 +0000 (0:00:00.142) 0:00:49.423 ********* 2026-03-10 00:46:55.247401 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 00:46:55.247405 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:46:55.247409 | orchestrator | skipping: 
[testbed-node-4] 2026-03-10 00:46:55.247413 | orchestrator | 2026-03-10 00:46:55.247416 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-10 00:46:55.247420 | orchestrator | Tuesday 10 March 2026 00:46:52 +0000 (0:00:00.142) 0:00:49.566 ********* 2026-03-10 00:46:55.247424 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 00:46:55.247427 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:46:55.247431 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:55.247435 | orchestrator | 2026-03-10 00:46:55.247438 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-10 00:46:55.247442 | orchestrator | Tuesday 10 March 2026 00:46:52 +0000 (0:00:00.311) 0:00:49.877 ********* 2026-03-10 00:46:55.247448 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 00:46:55.247454 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:46:55.247460 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:55.247467 | orchestrator | 2026-03-10 00:46:55.247491 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-10 00:46:55.247508 | orchestrator | Tuesday 10 March 2026 00:46:52 +0000 (0:00:00.153) 0:00:50.030 ********* 2026-03-10 00:46:55.247522 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 
'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 00:46:55.247529 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:46:55.247535 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:55.247540 | orchestrator | 2026-03-10 00:46:55.247546 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-10 00:46:55.247552 | orchestrator | Tuesday 10 March 2026 00:46:53 +0000 (0:00:00.165) 0:00:50.196 ********* 2026-03-10 00:46:55.247559 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 00:46:55.247566 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:46:55.247572 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:55.247578 | orchestrator | 2026-03-10 00:46:55.247585 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-10 00:46:55.247592 | orchestrator | Tuesday 10 March 2026 00:46:53 +0000 (0:00:00.180) 0:00:50.376 ********* 2026-03-10 00:46:55.247598 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 00:46:55.247612 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:46:55.247619 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:55.247625 | orchestrator | 2026-03-10 00:46:55.247629 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-10 
00:46:55.247634 | orchestrator | Tuesday 10 March 2026 00:46:53 +0000 (0:00:00.204) 0:00:50.581 ********* 2026-03-10 00:46:55.247638 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 00:46:55.247643 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:46:55.247647 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:55.247651 | orchestrator | 2026-03-10 00:46:55.247657 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-10 00:46:55.247663 | orchestrator | Tuesday 10 March 2026 00:46:53 +0000 (0:00:00.143) 0:00:50.724 ********* 2026-03-10 00:46:55.247670 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:46:55.247676 | orchestrator | 2026-03-10 00:46:55.247682 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-10 00:46:55.247689 | orchestrator | Tuesday 10 March 2026 00:46:54 +0000 (0:00:00.516) 0:00:51.241 ********* 2026-03-10 00:46:55.247698 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:46:55.247705 | orchestrator | 2026-03-10 00:46:55.247711 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-10 00:46:55.247717 | orchestrator | Tuesday 10 March 2026 00:46:54 +0000 (0:00:00.498) 0:00:51.739 ********* 2026-03-10 00:46:55.247723 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:46:55.247728 | orchestrator | 2026-03-10 00:46:55.247735 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-10 00:46:55.247741 | orchestrator | Tuesday 10 March 2026 00:46:54 +0000 (0:00:00.154) 0:00:51.894 ********* 2026-03-10 00:46:55.247747 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'vg_name': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'}) 2026-03-10 00:46:55.247755 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'vg_name': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'}) 2026-03-10 00:46:55.247761 | orchestrator | 2026-03-10 00:46:55.247767 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-10 00:46:55.247773 | orchestrator | Tuesday 10 March 2026 00:46:54 +0000 (0:00:00.186) 0:00:52.080 ********* 2026-03-10 00:46:55.247779 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 00:46:55.247785 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:46:55.247792 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:46:55.247797 | orchestrator | 2026-03-10 00:46:55.247804 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-10 00:46:55.247810 | orchestrator | Tuesday 10 March 2026 00:46:55 +0000 (0:00:00.171) 0:00:52.251 ********* 2026-03-10 00:46:55.247817 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 00:46:55.247830 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:47:01.969252 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:47:01.969429 | orchestrator | 2026-03-10 00:47:01.969443 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-10 00:47:01.969453 | 
orchestrator | Tuesday 10 March 2026 00:46:55 +0000 (0:00:00.174) 0:00:52.426 ********* 2026-03-10 00:47:01.969462 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})  2026-03-10 00:47:01.969472 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})  2026-03-10 00:47:01.969480 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:47:01.969488 | orchestrator | 2026-03-10 00:47:01.969496 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-10 00:47:01.969504 | orchestrator | Tuesday 10 March 2026 00:46:55 +0000 (0:00:00.157) 0:00:52.583 ********* 2026-03-10 00:47:01.969512 | orchestrator | ok: [testbed-node-4] => { 2026-03-10 00:47:01.969520 | orchestrator |  "lvm_report": { 2026-03-10 00:47:01.969528 | orchestrator |  "lv": [ 2026-03-10 00:47:01.969536 | orchestrator |  { 2026-03-10 00:47:01.969544 | orchestrator |  "lv_name": "osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e", 2026-03-10 00:47:01.969553 | orchestrator |  "vg_name": "ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e" 2026-03-10 00:47:01.969561 | orchestrator |  }, 2026-03-10 00:47:01.969568 | orchestrator |  { 2026-03-10 00:47:01.969576 | orchestrator |  "lv_name": "osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e", 2026-03-10 00:47:01.969584 | orchestrator |  "vg_name": "ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e" 2026-03-10 00:47:01.969592 | orchestrator |  } 2026-03-10 00:47:01.969599 | orchestrator |  ], 2026-03-10 00:47:01.969607 | orchestrator |  "pv": [ 2026-03-10 00:47:01.969615 | orchestrator |  { 2026-03-10 00:47:01.969623 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-10 00:47:01.969636 | orchestrator |  "vg_name": "ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e" 2026-03-10 00:47:01.969644 | orchestrator |  }, 2026-03-10 
00:47:01.969652 | orchestrator |  { 2026-03-10 00:47:01.969660 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-10 00:47:01.969668 | orchestrator |  "vg_name": "ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e" 2026-03-10 00:47:01.969676 | orchestrator |  } 2026-03-10 00:47:01.969683 | orchestrator |  ] 2026-03-10 00:47:01.969691 | orchestrator |  } 2026-03-10 00:47:01.969699 | orchestrator | } 2026-03-10 00:47:01.969707 | orchestrator | 2026-03-10 00:47:01.969715 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-10 00:47:01.969723 | orchestrator | 2026-03-10 00:47:01.969731 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-10 00:47:01.969739 | orchestrator | Tuesday 10 March 2026 00:46:56 +0000 (0:00:00.557) 0:00:53.141 ********* 2026-03-10 00:47:01.969747 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-10 00:47:01.969756 | orchestrator | 2026-03-10 00:47:01.969766 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-10 00:47:01.969775 | orchestrator | Tuesday 10 March 2026 00:46:56 +0000 (0:00:00.268) 0:00:53.409 ********* 2026-03-10 00:47:01.969784 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:47:01.969794 | orchestrator | 2026-03-10 00:47:01.969803 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:47:01.969812 | orchestrator | Tuesday 10 March 2026 00:46:56 +0000 (0:00:00.292) 0:00:53.702 ********* 2026-03-10 00:47:01.969821 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-10 00:47:01.969828 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-10 00:47:01.969836 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-10 00:47:01.969844 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2026-03-10 00:47:01.969858 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2026-03-10 00:47:01.969866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2026-03-10 00:47:01.969873 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2026-03-10 00:47:01.969881 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-10 00:47:01.969889 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-10 00:47:01.969900 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-10 00:47:01.969908 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-10 00:47:01.969916 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-10 00:47:01.969924 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-10 00:47:01.969931 | orchestrator | 2026-03-10 00:47:01.969939 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:47:01.969947 | orchestrator | Tuesday 10 March 2026 00:46:57 +0000 (0:00:00.469) 0:00:54.172 ********* 2026-03-10 00:47:01.969954 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:01.969962 | orchestrator | 2026-03-10 00:47:01.969970 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:47:01.969978 | orchestrator | Tuesday 10 March 2026 00:46:57 +0000 (0:00:00.236) 0:00:54.408 ********* 2026-03-10 00:47:01.969985 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:01.969993 | orchestrator | 2026-03-10 
00:47:01.970001 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:47:01.970080 | orchestrator | Tuesday 10 March 2026 00:46:57 +0000 (0:00:00.204) 0:00:54.612 ********* 2026-03-10 00:47:01.970096 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:01.970110 | orchestrator | 2026-03-10 00:47:01.970123 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:47:01.970135 | orchestrator | Tuesday 10 March 2026 00:46:57 +0000 (0:00:00.209) 0:00:54.822 ********* 2026-03-10 00:47:01.970148 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:01.970162 | orchestrator | 2026-03-10 00:47:01.970175 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:47:01.970187 | orchestrator | Tuesday 10 March 2026 00:46:57 +0000 (0:00:00.202) 0:00:55.025 ********* 2026-03-10 00:47:01.970201 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:01.970213 | orchestrator | 2026-03-10 00:47:01.970227 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:47:01.970240 | orchestrator | Tuesday 10 March 2026 00:46:58 +0000 (0:00:00.695) 0:00:55.721 ********* 2026-03-10 00:47:01.970254 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:01.970287 | orchestrator | 2026-03-10 00:47:01.970300 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:47:01.970314 | orchestrator | Tuesday 10 March 2026 00:46:58 +0000 (0:00:00.223) 0:00:55.945 ********* 2026-03-10 00:47:01.970326 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:01.970334 | orchestrator | 2026-03-10 00:47:01.970342 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:47:01.970350 | orchestrator | Tuesday 10 March 2026 00:46:59 +0000 (0:00:00.225) 
0:00:56.170 ********* 2026-03-10 00:47:01.970358 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:01.970366 | orchestrator | 2026-03-10 00:47:01.970373 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:47:01.970381 | orchestrator | Tuesday 10 March 2026 00:46:59 +0000 (0:00:00.219) 0:00:56.390 ********* 2026-03-10 00:47:01.970389 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b) 2026-03-10 00:47:01.970404 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b) 2026-03-10 00:47:01.970419 | orchestrator | 2026-03-10 00:47:01.970427 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:47:01.970434 | orchestrator | Tuesday 10 March 2026 00:46:59 +0000 (0:00:00.458) 0:00:56.849 ********* 2026-03-10 00:47:01.970442 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_525599b5-6362-4aac-a0b3-94bd4cb39972) 2026-03-10 00:47:01.970449 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_525599b5-6362-4aac-a0b3-94bd4cb39972) 2026-03-10 00:47:01.970457 | orchestrator | 2026-03-10 00:47:01.970464 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:47:01.970472 | orchestrator | Tuesday 10 March 2026 00:47:00 +0000 (0:00:00.528) 0:00:57.378 ********* 2026-03-10 00:47:01.970480 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_885a647d-e739-4ea9-ae01-9c2ce04d6822) 2026-03-10 00:47:01.970488 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_885a647d-e739-4ea9-ae01-9c2ce04d6822) 2026-03-10 00:47:01.970495 | orchestrator | 2026-03-10 00:47:01.970503 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:47:01.970511 | orchestrator | Tuesday 10 
March 2026 00:47:00 +0000 (0:00:00.473) 0:00:57.852 ********* 2026-03-10 00:47:01.970518 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_3e39970d-8644-42a9-a13b-932f32b0237f) 2026-03-10 00:47:01.970526 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_3e39970d-8644-42a9-a13b-932f32b0237f) 2026-03-10 00:47:01.970534 | orchestrator | 2026-03-10 00:47:01.970541 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-10 00:47:01.970549 | orchestrator | Tuesday 10 March 2026 00:47:01 +0000 (0:00:00.469) 0:00:58.321 ********* 2026-03-10 00:47:01.970557 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-10 00:47:01.970564 | orchestrator | 2026-03-10 00:47:01.970572 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:47:01.970579 | orchestrator | Tuesday 10 March 2026 00:47:01 +0000 (0:00:00.382) 0:00:58.704 ********* 2026-03-10 00:47:01.970587 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-10 00:47:01.970595 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-10 00:47:01.970603 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-10 00:47:01.970611 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-10 00:47:01.970618 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-10 00:47:01.970626 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-10 00:47:01.970633 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-10 00:47:01.970641 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-10 00:47:01.970649 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-10 00:47:01.970656 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-10 00:47:01.970664 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2026-03-10 00:47:01.970681 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-10 00:47:11.116890 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-10 00:47:11.116997 | orchestrator | 2026-03-10 00:47:11.117021 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:47:11.117042 | orchestrator | Tuesday 10 March 2026 00:47:02 +0000 (0:00:00.430) 0:00:59.135 ********* 2026-03-10 00:47:11.117087 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:11.117108 | orchestrator | 2026-03-10 00:47:11.117128 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:47:11.117147 | orchestrator | Tuesday 10 March 2026 00:47:02 +0000 (0:00:00.216) 0:00:59.351 ********* 2026-03-10 00:47:11.117167 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:11.117185 | orchestrator | 2026-03-10 00:47:11.117204 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:47:11.117223 | orchestrator | Tuesday 10 March 2026 00:47:02 +0000 (0:00:00.732) 0:01:00.083 ********* 2026-03-10 00:47:11.117241 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:11.117367 | orchestrator | 2026-03-10 00:47:11.117390 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:47:11.117409 | 
orchestrator | Tuesday 10 March 2026 00:47:03 +0000 (0:00:00.217) 0:01:00.301 ********* 2026-03-10 00:47:11.117427 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:11.117446 | orchestrator | 2026-03-10 00:47:11.117464 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:47:11.117483 | orchestrator | Tuesday 10 March 2026 00:47:03 +0000 (0:00:00.216) 0:01:00.517 ********* 2026-03-10 00:47:11.117501 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:11.117519 | orchestrator | 2026-03-10 00:47:11.117536 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:47:11.117553 | orchestrator | Tuesday 10 March 2026 00:47:03 +0000 (0:00:00.216) 0:01:00.734 ********* 2026-03-10 00:47:11.117569 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:11.117585 | orchestrator | 2026-03-10 00:47:11.117619 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:47:11.117636 | orchestrator | Tuesday 10 March 2026 00:47:03 +0000 (0:00:00.215) 0:01:00.950 ********* 2026-03-10 00:47:11.117652 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:11.117670 | orchestrator | 2026-03-10 00:47:11.117698 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:47:11.117717 | orchestrator | Tuesday 10 March 2026 00:47:04 +0000 (0:00:00.213) 0:01:01.163 ********* 2026-03-10 00:47:11.117751 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:11.117773 | orchestrator | 2026-03-10 00:47:11.117811 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:47:11.117843 | orchestrator | Tuesday 10 March 2026 00:47:04 +0000 (0:00:00.209) 0:01:01.372 ********* 2026-03-10 00:47:11.117873 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-10 00:47:11.117910 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2026-03-10 00:47:11.117938 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-10 00:47:11.117960 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-10 00:47:11.117976 | orchestrator | 2026-03-10 00:47:11.117992 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:47:11.118009 | orchestrator | Tuesday 10 March 2026 00:47:04 +0000 (0:00:00.667) 0:01:02.040 ********* 2026-03-10 00:47:11.118124 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:11.118143 | orchestrator | 2026-03-10 00:47:11.118159 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:47:11.118178 | orchestrator | Tuesday 10 March 2026 00:47:05 +0000 (0:00:00.244) 0:01:02.285 ********* 2026-03-10 00:47:11.118195 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:11.118213 | orchestrator | 2026-03-10 00:47:11.118232 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:47:11.118250 | orchestrator | Tuesday 10 March 2026 00:47:05 +0000 (0:00:00.211) 0:01:02.497 ********* 2026-03-10 00:47:11.118348 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:11.118366 | orchestrator | 2026-03-10 00:47:11.118382 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-10 00:47:11.118399 | orchestrator | Tuesday 10 March 2026 00:47:05 +0000 (0:00:00.196) 0:01:02.694 ********* 2026-03-10 00:47:11.118432 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:11.118449 | orchestrator | 2026-03-10 00:47:11.118466 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-10 00:47:11.118482 | orchestrator | Tuesday 10 March 2026 00:47:05 +0000 (0:00:00.232) 0:01:02.926 ********* 2026-03-10 00:47:11.118500 | orchestrator | skipping: [testbed-node-5] 2026-03-10 
00:47:11.118517 | orchestrator | 2026-03-10 00:47:11.118534 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-10 00:47:11.118551 | orchestrator | Tuesday 10 March 2026 00:47:06 +0000 (0:00:00.392) 0:01:03.319 ********* 2026-03-10 00:47:11.118568 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '276dc5cf-0fff-57f4-b280-c3cda8556bee'}}) 2026-03-10 00:47:11.118586 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '1c4f45a1-f837-5281-b6b5-75662d68eedd'}}) 2026-03-10 00:47:11.118603 | orchestrator | 2026-03-10 00:47:11.118620 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-10 00:47:11.118637 | orchestrator | Tuesday 10 March 2026 00:47:06 +0000 (0:00:00.244) 0:01:03.564 ********* 2026-03-10 00:47:11.118655 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'}) 2026-03-10 00:47:11.118673 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'}) 2026-03-10 00:47:11.118690 | orchestrator | 2026-03-10 00:47:11.118707 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-10 00:47:11.118746 | orchestrator | Tuesday 10 March 2026 00:47:08 +0000 (0:00:01.853) 0:01:05.418 ********* 2026-03-10 00:47:11.118764 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:11.118782 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:11.118799 | orchestrator | skipping: 
[testbed-node-5] 2026-03-10 00:47:11.118816 | orchestrator | 2026-03-10 00:47:11.118833 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-10 00:47:11.118849 | orchestrator | Tuesday 10 March 2026 00:47:08 +0000 (0:00:00.159) 0:01:05.577 ********* 2026-03-10 00:47:11.118866 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'}) 2026-03-10 00:47:11.118883 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'}) 2026-03-10 00:47:11.118899 | orchestrator | 2026-03-10 00:47:11.118916 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-10 00:47:11.118933 | orchestrator | Tuesday 10 March 2026 00:47:09 +0000 (0:00:01.291) 0:01:06.868 ********* 2026-03-10 00:47:11.118950 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:11.118966 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:11.118982 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:11.118999 | orchestrator | 2026-03-10 00:47:11.119016 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-10 00:47:11.119032 | orchestrator | Tuesday 10 March 2026 00:47:09 +0000 (0:00:00.133) 0:01:07.002 ********* 2026-03-10 00:47:11.119049 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:11.119066 | orchestrator | 2026-03-10 00:47:11.119082 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-10 00:47:11.119099 | 
orchestrator | Tuesday 10 March 2026 00:47:10 +0000 (0:00:00.138) 0:01:07.141 ********* 2026-03-10 00:47:11.119126 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:11.119143 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:11.119160 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:11.119176 | orchestrator | 2026-03-10 00:47:11.119193 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-10 00:47:11.119211 | orchestrator | Tuesday 10 March 2026 00:47:10 +0000 (0:00:00.159) 0:01:07.301 ********* 2026-03-10 00:47:11.119227 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:11.119244 | orchestrator | 2026-03-10 00:47:11.119282 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-10 00:47:11.119300 | orchestrator | Tuesday 10 March 2026 00:47:10 +0000 (0:00:00.125) 0:01:07.427 ********* 2026-03-10 00:47:11.119316 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:11.119333 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:11.119351 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:11.119368 | orchestrator | 2026-03-10 00:47:11.119385 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-10 00:47:11.119414 | orchestrator | Tuesday 10 March 2026 00:47:10 +0000 (0:00:00.139) 0:01:07.567 ********* 2026-03-10 00:47:11.119433 | orchestrator | 
skipping: [testbed-node-5] 2026-03-10 00:47:11.119450 | orchestrator | 2026-03-10 00:47:11.119467 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-10 00:47:11.119485 | orchestrator | Tuesday 10 March 2026 00:47:10 +0000 (0:00:00.133) 0:01:07.701 ********* 2026-03-10 00:47:11.119504 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:11.119523 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:11.119541 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:11.119559 | orchestrator | 2026-03-10 00:47:11.119576 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-10 00:47:11.119593 | orchestrator | Tuesday 10 March 2026 00:47:10 +0000 (0:00:00.145) 0:01:07.846 ********* 2026-03-10 00:47:11.119610 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:47:11.119628 | orchestrator | 2026-03-10 00:47:11.119645 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-10 00:47:11.119662 | orchestrator | Tuesday 10 March 2026 00:47:11 +0000 (0:00:00.288) 0:01:08.135 ********* 2026-03-10 00:47:11.119691 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:17.242992 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:17.243083 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243092 | orchestrator | 2026-03-10 00:47:17.243098 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2026-03-10 00:47:17.243104 | orchestrator | Tuesday 10 March 2026 00:47:11 +0000 (0:00:00.155) 0:01:08.291 ********* 2026-03-10 00:47:17.243109 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:17.243115 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:17.243135 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243140 | orchestrator | 2026-03-10 00:47:17.243145 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-10 00:47:17.243150 | orchestrator | Tuesday 10 March 2026 00:47:11 +0000 (0:00:00.138) 0:01:08.430 ********* 2026-03-10 00:47:17.243154 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:17.243159 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:17.243164 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243168 | orchestrator | 2026-03-10 00:47:17.243173 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-10 00:47:17.243188 | orchestrator | Tuesday 10 March 2026 00:47:11 +0000 (0:00:00.134) 0:01:08.564 ********* 2026-03-10 00:47:17.243193 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243197 | orchestrator | 2026-03-10 00:47:17.243202 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-10 00:47:17.243206 | orchestrator | Tuesday 10 March 2026 00:47:11 +0000 
(0:00:00.133) 0:01:08.698 ********* 2026-03-10 00:47:17.243211 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243215 | orchestrator | 2026-03-10 00:47:17.243220 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-10 00:47:17.243224 | orchestrator | Tuesday 10 March 2026 00:47:11 +0000 (0:00:00.133) 0:01:08.831 ********* 2026-03-10 00:47:17.243229 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243233 | orchestrator | 2026-03-10 00:47:17.243237 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-10 00:47:17.243242 | orchestrator | Tuesday 10 March 2026 00:47:11 +0000 (0:00:00.138) 0:01:08.969 ********* 2026-03-10 00:47:17.243246 | orchestrator | ok: [testbed-node-5] => { 2026-03-10 00:47:17.243273 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-10 00:47:17.243279 | orchestrator | } 2026-03-10 00:47:17.243284 | orchestrator | 2026-03-10 00:47:17.243289 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-10 00:47:17.243293 | orchestrator | Tuesday 10 March 2026 00:47:12 +0000 (0:00:00.142) 0:01:09.112 ********* 2026-03-10 00:47:17.243298 | orchestrator | ok: [testbed-node-5] => { 2026-03-10 00:47:17.243302 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-10 00:47:17.243307 | orchestrator | } 2026-03-10 00:47:17.243311 | orchestrator | 2026-03-10 00:47:17.243316 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-10 00:47:17.243320 | orchestrator | Tuesday 10 March 2026 00:47:12 +0000 (0:00:00.140) 0:01:09.253 ********* 2026-03-10 00:47:17.243325 | orchestrator | ok: [testbed-node-5] => { 2026-03-10 00:47:17.243329 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-10 00:47:17.243334 | orchestrator | } 2026-03-10 00:47:17.243338 | orchestrator | 2026-03-10 00:47:17.243343 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-10 00:47:17.243347 | orchestrator | Tuesday 10 March 2026 00:47:12 +0000 (0:00:00.130) 0:01:09.384 ********* 2026-03-10 00:47:17.243352 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:47:17.243356 | orchestrator | 2026-03-10 00:47:17.243361 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-10 00:47:17.243365 | orchestrator | Tuesday 10 March 2026 00:47:12 +0000 (0:00:00.475) 0:01:09.859 ********* 2026-03-10 00:47:17.243370 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:47:17.243374 | orchestrator | 2026-03-10 00:47:17.243379 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-10 00:47:17.243383 | orchestrator | Tuesday 10 March 2026 00:47:13 +0000 (0:00:00.484) 0:01:10.343 ********* 2026-03-10 00:47:17.243387 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:47:17.243397 | orchestrator | 2026-03-10 00:47:17.243401 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2026-03-10 00:47:17.243405 | orchestrator | Tuesday 10 March 2026 00:47:13 +0000 (0:00:00.636) 0:01:10.979 ********* 2026-03-10 00:47:17.243410 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:47:17.243414 | orchestrator | 2026-03-10 00:47:17.243419 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-10 00:47:17.243423 | orchestrator | Tuesday 10 March 2026 00:47:14 +0000 (0:00:00.146) 0:01:11.126 ********* 2026-03-10 00:47:17.243428 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243432 | orchestrator | 2026-03-10 00:47:17.243437 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-10 00:47:17.243441 | orchestrator | Tuesday 10 March 2026 00:47:14 +0000 (0:00:00.128) 0:01:11.254 ********* 2026-03-10 00:47:17.243446 | 
orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243450 | orchestrator | 2026-03-10 00:47:17.243454 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-10 00:47:17.243459 | orchestrator | Tuesday 10 March 2026 00:47:14 +0000 (0:00:00.156) 0:01:11.411 ********* 2026-03-10 00:47:17.243463 | orchestrator | ok: [testbed-node-5] => { 2026-03-10 00:47:17.243468 | orchestrator |  "vgs_report": { 2026-03-10 00:47:17.243473 | orchestrator |  "vg": [] 2026-03-10 00:47:17.243489 | orchestrator |  } 2026-03-10 00:47:17.243494 | orchestrator | } 2026-03-10 00:47:17.243499 | orchestrator | 2026-03-10 00:47:17.243503 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-10 00:47:17.243507 | orchestrator | Tuesday 10 March 2026 00:47:14 +0000 (0:00:00.175) 0:01:11.587 ********* 2026-03-10 00:47:17.243512 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243516 | orchestrator | 2026-03-10 00:47:17.243521 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2026-03-10 00:47:17.243525 | orchestrator | Tuesday 10 March 2026 00:47:14 +0000 (0:00:00.136) 0:01:11.724 ********* 2026-03-10 00:47:17.243530 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243534 | orchestrator | 2026-03-10 00:47:17.243539 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-10 00:47:17.243543 | orchestrator | Tuesday 10 March 2026 00:47:14 +0000 (0:00:00.166) 0:01:11.890 ********* 2026-03-10 00:47:17.243549 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243554 | orchestrator | 2026-03-10 00:47:17.243559 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-10 00:47:17.243564 | orchestrator | Tuesday 10 March 2026 00:47:14 +0000 (0:00:00.131) 0:01:12.022 ********* 2026-03-10 00:47:17.243569 | 
orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243575 | orchestrator | 2026-03-10 00:47:17.243580 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-10 00:47:17.243585 | orchestrator | Tuesday 10 March 2026 00:47:15 +0000 (0:00:00.136) 0:01:12.159 ********* 2026-03-10 00:47:17.243590 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243595 | orchestrator | 2026-03-10 00:47:17.243601 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-10 00:47:17.243606 | orchestrator | Tuesday 10 March 2026 00:47:15 +0000 (0:00:00.147) 0:01:12.307 ********* 2026-03-10 00:47:17.243612 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243617 | orchestrator | 2026-03-10 00:47:17.243622 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-10 00:47:17.243631 | orchestrator | Tuesday 10 March 2026 00:47:15 +0000 (0:00:00.142) 0:01:12.449 ********* 2026-03-10 00:47:17.243636 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243641 | orchestrator | 2026-03-10 00:47:17.243646 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-10 00:47:17.243652 | orchestrator | Tuesday 10 March 2026 00:47:15 +0000 (0:00:00.148) 0:01:12.597 ********* 2026-03-10 00:47:17.243657 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243662 | orchestrator | 2026-03-10 00:47:17.243667 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-10 00:47:17.243678 | orchestrator | Tuesday 10 March 2026 00:47:15 +0000 (0:00:00.416) 0:01:13.014 ********* 2026-03-10 00:47:17.243683 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243688 | orchestrator | 2026-03-10 00:47:17.243694 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2026-03-10 00:47:17.243699 | orchestrator | Tuesday 10 March 2026 00:47:16 +0000 (0:00:00.141) 0:01:13.155 ********* 2026-03-10 00:47:17.243704 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243709 | orchestrator | 2026-03-10 00:47:17.243715 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-10 00:47:17.243720 | orchestrator | Tuesday 10 March 2026 00:47:16 +0000 (0:00:00.159) 0:01:13.314 ********* 2026-03-10 00:47:17.243725 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243730 | orchestrator | 2026-03-10 00:47:17.243735 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-10 00:47:17.243741 | orchestrator | Tuesday 10 March 2026 00:47:16 +0000 (0:00:00.150) 0:01:13.465 ********* 2026-03-10 00:47:17.243746 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243751 | orchestrator | 2026-03-10 00:47:17.243756 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-10 00:47:17.243761 | orchestrator | Tuesday 10 March 2026 00:47:16 +0000 (0:00:00.140) 0:01:13.606 ********* 2026-03-10 00:47:17.243766 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243772 | orchestrator | 2026-03-10 00:47:17.243777 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-10 00:47:17.243782 | orchestrator | Tuesday 10 March 2026 00:47:16 +0000 (0:00:00.159) 0:01:13.765 ********* 2026-03-10 00:47:17.243788 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243793 | orchestrator | 2026-03-10 00:47:17.243798 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-10 00:47:17.243803 | orchestrator | Tuesday 10 March 2026 00:47:16 +0000 (0:00:00.159) 0:01:13.924 ********* 2026-03-10 00:47:17.243808 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:17.243814 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:17.243819 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243824 | orchestrator | 2026-03-10 00:47:17.243829 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-10 00:47:17.243835 | orchestrator | Tuesday 10 March 2026 00:47:17 +0000 (0:00:00.160) 0:01:14.085 ********* 2026-03-10 00:47:17.243840 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:17.243845 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:17.243850 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:17.243855 | orchestrator | 2026-03-10 00:47:17.243860 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-10 00:47:17.243866 | orchestrator | Tuesday 10 March 2026 00:47:17 +0000 (0:00:00.170) 0:01:14.256 ********* 2026-03-10 00:47:17.243874 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:20.484224 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:20.484364 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:20.484375 | orchestrator | 2026-03-10 00:47:20.484383 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2026-03-10 00:47:20.484389 | orchestrator | Tuesday 10 March 2026 00:47:17 +0000 (0:00:00.168) 0:01:14.424 ********* 2026-03-10 00:47:20.484414 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:20.484420 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:20.484425 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:20.484430 | orchestrator | 2026-03-10 00:47:20.484435 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-10 00:47:20.484443 | orchestrator | Tuesday 10 March 2026 00:47:17 +0000 (0:00:00.158) 0:01:14.582 ********* 2026-03-10 00:47:20.484451 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:20.484475 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:20.484488 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:20.484496 | orchestrator | 2026-03-10 00:47:20.484504 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-10 00:47:20.484512 | orchestrator | Tuesday 10 March 2026 00:47:17 +0000 (0:00:00.162) 0:01:14.745 ********* 2026-03-10 00:47:20.484519 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:20.484527 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:20.484535 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:20.484544 | orchestrator | 2026-03-10 00:47:20.484552 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-10 00:47:20.484560 | orchestrator | Tuesday 10 March 2026 00:47:18 +0000 (0:00:00.425) 0:01:15.171 ********* 2026-03-10 00:47:20.484568 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:20.484577 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:20.484585 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:20.484594 | orchestrator | 2026-03-10 00:47:20.484602 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-10 00:47:20.484607 | orchestrator | Tuesday 10 March 2026 00:47:18 +0000 (0:00:00.180) 0:01:15.351 ********* 2026-03-10 00:47:20.484612 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:20.484617 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:20.484622 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:20.484627 | orchestrator | 2026-03-10 00:47:20.484632 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-10 00:47:20.484637 | orchestrator | Tuesday 10 March 2026 00:47:18 +0000 (0:00:00.154) 0:01:15.505 ********* 2026-03-10 00:47:20.484642 | 
orchestrator | ok: [testbed-node-5] 2026-03-10 00:47:20.484648 | orchestrator | 2026-03-10 00:47:20.484653 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-10 00:47:20.484658 | orchestrator | Tuesday 10 March 2026 00:47:18 +0000 (0:00:00.490) 0:01:15.995 ********* 2026-03-10 00:47:20.484663 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:47:20.484668 | orchestrator | 2026-03-10 00:47:20.484673 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-10 00:47:20.484685 | orchestrator | Tuesday 10 March 2026 00:47:19 +0000 (0:00:00.490) 0:01:16.485 ********* 2026-03-10 00:47:20.484690 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:47:20.484695 | orchestrator | 2026-03-10 00:47:20.484700 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-10 00:47:20.484705 | orchestrator | Tuesday 10 March 2026 00:47:19 +0000 (0:00:00.175) 0:01:16.661 ********* 2026-03-10 00:47:20.484710 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'vg_name': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'}) 2026-03-10 00:47:20.484717 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'vg_name': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'}) 2026-03-10 00:47:20.484722 | orchestrator | 2026-03-10 00:47:20.484727 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-10 00:47:20.484732 | orchestrator | Tuesday 10 March 2026 00:47:19 +0000 (0:00:00.197) 0:01:16.858 ********* 2026-03-10 00:47:20.484753 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:20.484759 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:20.484765 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:20.484771 | orchestrator | 2026-03-10 00:47:20.484777 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-10 00:47:20.484783 | orchestrator | Tuesday 10 March 2026 00:47:19 +0000 (0:00:00.165) 0:01:17.024 ********* 2026-03-10 00:47:20.484789 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:20.484794 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:20.484800 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:20.484806 | orchestrator | 2026-03-10 00:47:20.484811 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-10 00:47:20.484817 | orchestrator | Tuesday 10 March 2026 00:47:20 +0000 (0:00:00.203) 0:01:17.227 ********* 2026-03-10 00:47:20.484823 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})  2026-03-10 00:47:20.484830 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})  2026-03-10 00:47:20.484835 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:20.484841 | orchestrator | 2026-03-10 00:47:20.484848 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-10 00:47:20.484853 | orchestrator | Tuesday 10 March 2026 00:47:20 +0000 (0:00:00.157) 0:01:17.385 ********* 2026-03-10 00:47:20.484860 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-10 00:47:20.484865 | orchestrator |  "lvm_report": { 2026-03-10 00:47:20.484871 | orchestrator |  "lv": [ 2026-03-10 00:47:20.484877 | orchestrator |  { 2026-03-10 00:47:20.484883 | orchestrator |  "lv_name": "osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd", 2026-03-10 00:47:20.484889 | orchestrator |  "vg_name": "ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd" 2026-03-10 00:47:20.484895 | orchestrator |  }, 2026-03-10 00:47:20.484901 | orchestrator |  { 2026-03-10 00:47:20.484907 | orchestrator |  "lv_name": "osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee", 2026-03-10 00:47:20.484913 | orchestrator |  "vg_name": "ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee" 2026-03-10 00:47:20.484918 | orchestrator |  } 2026-03-10 00:47:20.484924 | orchestrator |  ], 2026-03-10 00:47:20.484929 | orchestrator |  "pv": [ 2026-03-10 00:47:20.484938 | orchestrator |  { 2026-03-10 00:47:20.484943 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-10 00:47:20.484950 | orchestrator |  "vg_name": "ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee" 2026-03-10 00:47:20.484958 | orchestrator |  }, 2026-03-10 00:47:20.484966 | orchestrator |  { 2026-03-10 00:47:20.484975 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-10 00:47:20.484982 | orchestrator |  "vg_name": "ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd" 2026-03-10 00:47:20.484989 | orchestrator |  } 2026-03-10 00:47:20.484996 | orchestrator |  ] 2026-03-10 00:47:20.485004 | orchestrator |  } 2026-03-10 00:47:20.485012 | orchestrator | } 2026-03-10 00:47:20.485020 | orchestrator | 2026-03-10 00:47:20.485027 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:47:20.485035 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-10 00:47:20.485043 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-10 00:47:20.485050 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-10 00:47:20.485057 | orchestrator | 2026-03-10 00:47:20.485065 | orchestrator | 2026-03-10 00:47:20.485072 | orchestrator | 2026-03-10 00:47:20.485080 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:47:20.485088 | orchestrator | Tuesday 10 March 2026 00:47:20 +0000 (0:00:00.176) 0:01:17.561 ********* 2026-03-10 00:47:20.485096 | orchestrator | =============================================================================== 2026-03-10 00:47:20.485104 | orchestrator | Create block VGs -------------------------------------------------------- 5.83s 2026-03-10 00:47:20.485111 | orchestrator | Create block LVs -------------------------------------------------------- 4.15s 2026-03-10 00:47:20.485118 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.77s 2026-03-10 00:47:20.485126 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.77s 2026-03-10 00:47:20.485144 | orchestrator | Add known partitions to the list of available block devices ------------- 1.67s 2026-03-10 00:47:20.485152 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.66s 2026-03-10 00:47:20.485160 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.53s 2026-03-10 00:47:20.485168 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.53s 2026-03-10 00:47:20.485183 | orchestrator | Add known links to the list of available block devices ------------------ 1.47s 2026-03-10 00:47:20.986449 | orchestrator | Add known partitions to the list of available block devices ------------- 1.32s 2026-03-10 00:47:20.987462 | orchestrator | Add known links to the list of available block devices ------------------ 1.28s 2026-03-10 00:47:20.987522 | 
orchestrator | Print LVM report data --------------------------------------------------- 1.03s 2026-03-10 00:47:20.987533 | orchestrator | Add known links to the list of available block devices ------------------ 0.99s 2026-03-10 00:47:20.987542 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s 2026-03-10 00:47:20.987550 | orchestrator | Print number of OSDs wanted per DB+WAL VG ------------------------------- 0.88s 2026-03-10 00:47:20.987568 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.85s 2026-03-10 00:47:20.987576 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.83s 2026-03-10 00:47:20.987584 | orchestrator | Get initial list of available block devices ----------------------------- 0.79s 2026-03-10 00:47:20.987592 | orchestrator | Print 'Create WAL LVs for ceph_db_wal_devices' -------------------------- 0.78s 2026-03-10 00:47:20.987600 | orchestrator | Print 'Create WAL VGs' -------------------------------------------------- 0.73s 2026-03-10 00:47:33.878333 | orchestrator | 2026-03-10 00:47:33 | INFO  | Prepare task for execution of facts. 2026-03-10 00:47:33.953705 | orchestrator | 2026-03-10 00:47:33 | INFO  | Task 5f555067-0812-43c2-82fa-d594918f3d89 (facts) was prepared for execution. 2026-03-10 00:47:33.953830 | orchestrator | 2026-03-10 00:47:33 | INFO  | It takes a moment until task 5f555067-0812-43c2-82fa-d594918f3d89 (facts) has been started and output is visible here. 
2026-03-10 00:47:48.506275 | orchestrator | 2026-03-10 00:47:48.506354 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-10 00:47:48.506362 | orchestrator | 2026-03-10 00:47:48.506367 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-10 00:47:48.506372 | orchestrator | Tuesday 10 March 2026 00:47:39 +0000 (0:00:00.370) 0:00:00.370 ********* 2026-03-10 00:47:48.506379 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:47:48.506387 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:47:48.506393 | orchestrator | ok: [testbed-manager] 2026-03-10 00:47:48.506399 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:47:48.506406 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:47:48.506412 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:47:48.506421 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:47:48.506427 | orchestrator | 2026-03-10 00:47:48.506433 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-10 00:47:48.506439 | orchestrator | Tuesday 10 March 2026 00:47:40 +0000 (0:00:01.209) 0:00:01.580 ********* 2026-03-10 00:47:48.506447 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:47:48.506454 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:47:48.506463 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:47:48.506471 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:47:48.506478 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:47:48.506484 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:47:48.506491 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:48.506497 | orchestrator | 2026-03-10 00:47:48.506503 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-10 00:47:48.506510 | orchestrator | 2026-03-10 00:47:48.506517 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-10 00:47:48.506524 | orchestrator | Tuesday 10 March 2026 00:47:42 +0000 (0:00:01.598) 0:00:03.178 ********* 2026-03-10 00:47:48.506530 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:47:48.506536 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:47:48.506542 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:47:48.506549 | orchestrator | ok: [testbed-manager] 2026-03-10 00:47:48.506555 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:47:48.506562 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:47:48.506568 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:47:48.506575 | orchestrator | 2026-03-10 00:47:48.506580 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-10 00:47:48.506584 | orchestrator | 2026-03-10 00:47:48.506589 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-10 00:47:48.506593 | orchestrator | Tuesday 10 March 2026 00:47:47 +0000 (0:00:05.140) 0:00:08.318 ********* 2026-03-10 00:47:48.506597 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:47:48.506601 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:47:48.506605 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:47:48.506609 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:47:48.506613 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:47:48.506617 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:47:48.506620 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:47:48.506624 | orchestrator | 2026-03-10 00:47:48.506628 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:47:48.506632 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:47:48.506638 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-10 00:47:48.506663 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:47:48.506667 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:47:48.506671 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:47:48.506675 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:47:48.506678 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:47:48.506682 | orchestrator | 2026-03-10 00:47:48.506686 | orchestrator | 2026-03-10 00:47:48.506690 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:47:48.506694 | orchestrator | Tuesday 10 March 2026 00:47:47 +0000 (0:00:00.610) 0:00:08.929 ********* 2026-03-10 00:47:48.506698 | orchestrator | =============================================================================== 2026-03-10 00:47:48.506702 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.14s 2026-03-10 00:47:48.506706 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.60s 2026-03-10 00:47:48.506709 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.21s 2026-03-10 00:47:48.506713 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.61s 2026-03-10 00:48:01.345984 | orchestrator | 2026-03-10 00:48:01 | INFO  | Prepare task for execution of frr. 2026-03-10 00:48:01.425891 | orchestrator | 2026-03-10 00:48:01 | INFO  | Task 852d974b-698f-4b8d-9d00-e0390680ab40 (frr) was prepared for execution. 
2026-03-10 00:48:01.425995 | orchestrator | 2026-03-10 00:48:01 | INFO  | It takes a moment until task 852d974b-698f-4b8d-9d00-e0390680ab40 (frr) has been started and output is visible here. 2026-03-10 00:48:34.974767 | orchestrator | 2026-03-10 00:48:34.974842 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-10 00:48:34.974853 | orchestrator | 2026-03-10 00:48:34.974860 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-10 00:48:34.974867 | orchestrator | Tuesday 10 March 2026 00:48:06 +0000 (0:00:00.261) 0:00:00.262 ********* 2026-03-10 00:48:34.974872 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-10 00:48:34.974878 | orchestrator | 2026-03-10 00:48:34.974882 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-10 00:48:34.974885 | orchestrator | Tuesday 10 March 2026 00:48:06 +0000 (0:00:00.259) 0:00:00.521 ********* 2026-03-10 00:48:34.974889 | orchestrator | changed: [testbed-manager] 2026-03-10 00:48:34.974894 | orchestrator | 2026-03-10 00:48:34.974897 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-10 00:48:34.974901 | orchestrator | Tuesday 10 March 2026 00:48:08 +0000 (0:00:01.527) 0:00:02.048 ********* 2026-03-10 00:48:34.974905 | orchestrator | changed: [testbed-manager] 2026-03-10 00:48:34.974911 | orchestrator | 2026-03-10 00:48:34.974917 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-10 00:48:34.974924 | orchestrator | Tuesday 10 March 2026 00:48:20 +0000 (0:00:12.519) 0:00:14.567 ********* 2026-03-10 00:48:34.974930 | orchestrator | ok: [testbed-manager] 2026-03-10 00:48:34.974937 | orchestrator | 2026-03-10 00:48:34.974944 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-10 00:48:34.974950 | orchestrator | Tuesday 10 March 2026 00:48:22 +0000 (0:00:01.143) 0:00:15.711 ********* 2026-03-10 00:48:34.974957 | orchestrator | changed: [testbed-manager] 2026-03-10 00:48:34.974975 | orchestrator | 2026-03-10 00:48:34.974981 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-10 00:48:34.974988 | orchestrator | Tuesday 10 March 2026 00:48:23 +0000 (0:00:00.964) 0:00:16.675 ********* 2026-03-10 00:48:34.974995 | orchestrator | ok: [testbed-manager] 2026-03-10 00:48:34.975001 | orchestrator | 2026-03-10 00:48:34.975007 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-03-10 00:48:34.975013 | orchestrator | Tuesday 10 March 2026 00:48:24 +0000 (0:00:01.517) 0:00:18.193 ********* 2026-03-10 00:48:34.975019 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:48:34.975026 | orchestrator | 2026-03-10 00:48:34.975032 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-03-10 00:48:34.975038 | orchestrator | Tuesday 10 March 2026 00:48:24 +0000 (0:00:00.183) 0:00:18.376 ********* 2026-03-10 00:48:34.975045 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:48:34.975051 | orchestrator | 2026-03-10 00:48:34.975057 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-03-10 00:48:34.975063 | orchestrator | Tuesday 10 March 2026 00:48:24 +0000 (0:00:00.164) 0:00:18.541 ********* 2026-03-10 00:48:34.975069 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:48:34.975076 | orchestrator | 2026-03-10 00:48:34.975082 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-10 00:48:34.975089 | orchestrator | Tuesday 10 March 2026 00:48:25 +0000 (0:00:00.170) 0:00:18.711 ********* 2026-03-10 
00:48:34.975095 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:48:34.975101 | orchestrator | 2026-03-10 00:48:34.975107 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-10 00:48:34.975113 | orchestrator | Tuesday 10 March 2026 00:48:25 +0000 (0:00:00.157) 0:00:18.869 ********* 2026-03-10 00:48:34.975120 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:48:34.975126 | orchestrator | 2026-03-10 00:48:34.975133 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-10 00:48:34.975139 | orchestrator | Tuesday 10 March 2026 00:48:25 +0000 (0:00:00.223) 0:00:19.092 ********* 2026-03-10 00:48:34.975145 | orchestrator | changed: [testbed-manager] 2026-03-10 00:48:34.975152 | orchestrator | 2026-03-10 00:48:34.975159 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-10 00:48:34.975165 | orchestrator | Tuesday 10 March 2026 00:48:26 +0000 (0:00:01.254) 0:00:20.347 ********* 2026-03-10 00:48:34.975171 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-10 00:48:34.975203 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-10 00:48:34.975210 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-10 00:48:34.975216 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-10 00:48:34.975223 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-10 00:48:34.975229 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-10 00:48:34.975236 | orchestrator | 2026-03-10 00:48:34.975242 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-03-10 00:48:34.975248 | orchestrator | Tuesday 10 March 2026 00:48:31 +0000 (0:00:04.698) 0:00:25.045 ********* 2026-03-10 00:48:34.975254 | orchestrator | ok: [testbed-manager] 2026-03-10 00:48:34.975262 | orchestrator | 2026-03-10 00:48:34.975266 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-10 00:48:34.975269 | orchestrator | Tuesday 10 March 2026 00:48:33 +0000 (0:00:01.774) 0:00:26.820 ********* 2026-03-10 00:48:34.975273 | orchestrator | changed: [testbed-manager] 2026-03-10 00:48:34.975277 | orchestrator | 2026-03-10 00:48:34.975281 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:48:34.975290 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-10 00:48:34.975294 | orchestrator | 2026-03-10 00:48:34.975298 | orchestrator | 2026-03-10 00:48:34.975316 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:48:34.975320 | orchestrator | Tuesday 10 March 2026 00:48:34 +0000 (0:00:01.493) 0:00:28.313 ********* 2026-03-10 00:48:34.975324 | orchestrator | =============================================================================== 2026-03-10 00:48:34.975327 | orchestrator | osism.services.frr : Install frr package ------------------------------- 12.52s 2026-03-10 00:48:34.975331 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 4.70s 2026-03-10 00:48:34.975335 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.77s 2026-03-10 00:48:34.975339 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.53s 2026-03-10 00:48:34.975343 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.52s 
2026-03-10 00:48:34.975347 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.49s 2026-03-10 00:48:34.975352 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.25s 2026-03-10 00:48:34.975356 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.14s 2026-03-10 00:48:34.975360 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.96s 2026-03-10 00:48:34.975364 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.26s 2026-03-10 00:48:34.975368 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.22s 2026-03-10 00:48:34.975373 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.18s 2026-03-10 00:48:34.975377 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.17s 2026-03-10 00:48:34.975381 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.16s 2026-03-10 00:48:34.975385 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.16s 2026-03-10 00:48:35.324939 | orchestrator | 2026-03-10 00:48:35.326725 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Tue Mar 10 00:48:35 UTC 2026 2026-03-10 00:48:35.326792 | orchestrator | 2026-03-10 00:48:37.447680 | orchestrator | 2026-03-10 00:48:37 | INFO  | Collection nutshell is prepared for execution 2026-03-10 00:48:37.447726 | orchestrator | 2026-03-10 00:48:37 | INFO  | A [0] - dotfiles 2026-03-10 00:48:47.556279 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [0] - homer 2026-03-10 00:48:47.556356 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [0] - netdata 2026-03-10 00:48:47.556364 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [0] - openstackclient 2026-03-10 00:48:47.556371 | orchestrator | 2026-03-10 
00:48:47 | INFO  | A [0] - phpmyadmin 2026-03-10 00:48:47.556376 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [0] - common 2026-03-10 00:48:47.558635 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [1] -- loadbalancer 2026-03-10 00:48:47.558684 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [2] --- opensearch 2026-03-10 00:48:47.558706 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [2] --- mariadb-ng 2026-03-10 00:48:47.558914 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [3] ---- horizon 2026-03-10 00:48:47.558930 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [3] ---- keystone 2026-03-10 00:48:47.559243 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [4] ----- neutron 2026-03-10 00:48:47.559485 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [5] ------ wait-for-nova 2026-03-10 00:48:47.559498 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [6] ------- octavia 2026-03-10 00:48:47.561112 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [4] ----- barbican 2026-03-10 00:48:47.561187 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [4] ----- designate 2026-03-10 00:48:47.561256 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [4] ----- ironic 2026-03-10 00:48:47.561505 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [4] ----- placement 2026-03-10 00:48:47.561763 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [4] ----- magnum 2026-03-10 00:48:47.562316 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [1] -- openvswitch 2026-03-10 00:48:47.562526 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [2] --- ovn 2026-03-10 00:48:47.562843 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [1] -- memcached 2026-03-10 00:48:47.563046 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [1] -- redis 2026-03-10 00:48:47.563677 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [1] -- rabbitmq-ng 2026-03-10 00:48:47.563737 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [0] - kubernetes 2026-03-10 00:48:47.565943 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [1] -- 
kubeconfig 2026-03-10 00:48:47.565993 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [1] -- copy-kubeconfig 2026-03-10 00:48:47.566751 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [0] - ceph 2026-03-10 00:48:47.568161 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [1] -- ceph-pools 2026-03-10 00:48:47.568346 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [2] --- copy-ceph-keys 2026-03-10 00:48:47.568499 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [3] ---- cephclient 2026-03-10 00:48:47.569266 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-10 00:48:47.569337 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [4] ----- wait-for-keystone 2026-03-10 00:48:47.569354 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-10 00:48:47.569509 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [5] ------ glance 2026-03-10 00:48:47.569717 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [5] ------ cinder 2026-03-10 00:48:47.569867 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [5] ------ nova 2026-03-10 00:48:47.570236 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [4] ----- prometheus 2026-03-10 00:48:47.570515 | orchestrator | 2026-03-10 00:48:47 | INFO  | A [5] ------ grafana 2026-03-10 00:48:47.796339 | orchestrator | 2026-03-10 00:48:47 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-10 00:48:47.796406 | orchestrator | 2026-03-10 00:48:47 | INFO  | Tasks are running in the background 2026-03-10 00:48:51.932417 | orchestrator | 2026-03-10 00:48:51 | INFO  | No task IDs specified, wait for all currently running tasks 2026-03-10 00:48:54.106590 | orchestrator | 2026-03-10 00:48:54 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:48:54.106667 | orchestrator | 2026-03-10 00:48:54 | INFO  | Task d2e65125-ed2b-4861-a31c-93a8643111e4 is in state STARTED 2026-03-10 00:48:54.107643 | orchestrator | 2026-03-10 00:48:54 | INFO 
 | Task c9eebeb8-32a9-40ce-aca4-d3746681f926 is in state STARTED
2026-03-10 00:48:54.109738 | orchestrator | 2026-03-10 00:48:54 | INFO  | Task b39566dc-d35c-4888-bb95-3f6dcfeb42f4 is in state STARTED
2026-03-10 00:48:54.110440 | orchestrator | 2026-03-10 00:48:54 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:48:54.113452 | orchestrator | 2026-03-10 00:48:54 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED
2026-03-10 00:48:54.114638 | orchestrator | 2026-03-10 00:48:54 | INFO  | Task 57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state STARTED
2026-03-10 00:48:54.116101 | orchestrator | 2026-03-10 00:48:54 | INFO  | Wait 1 second(s) until the next check
[Identical polling cycles elided: the seven tasks fca26bd9, d2e65125, c9eebeb8, b39566dc, aa0b43a2, a667f894 and 57a77d2c were all reported in state STARTED every ~3 seconds from 00:48:57 through 00:49:25.]
2026-03-10 00:49:28.935991 | orchestrator | 2026-03-10 00:49:28 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:49:28.937637 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-03-10 00:49:28.937662 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
****
2026-03-10 00:49:28.937674 | orchestrator | Tuesday 10 March 2026 00:49:10 +0000 (0:00:01.571) 0:00:01.571 *********
2026-03-10 00:49:28.937715 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:49:28.937727 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:49:28.937738 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:49:28.937749 | orchestrator | changed: [testbed-manager]
2026-03-10 00:49:28.937760 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:49:28.937770 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:49:28.937781 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:49:28.937803 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-03-10 00:49:28.937813 | orchestrator | Tuesday 10 March 2026 00:49:15 +0000 (0:00:04.937) 0:00:06.508 *********
2026-03-10 00:49:28.937825 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-10 00:49:28.937836 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-10 00:49:28.937846 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-10 00:49:28.937857 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-10 00:49:28.937868 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-10 00:49:28.937879 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-10 00:49:28.937898 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-10 00:49:28.937921 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-03-10 00:49:28.937932 | orchestrator | Tuesday 10 March 2026 00:49:17 +0000 (0:00:02.102) 0:00:08.611 *********
2026-03-10 00:49:28.937947 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-10 00:49:16.646208', 'end': '2026-03-10 00:49:16.651525', 'delta': '0:00:00.005317', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
[Equivalent "ok" results for testbed-node-0 through testbed-node-5 elided; each host reports the same result dict, differing only in timestamps: rc=2 from `ls -F ~/.tmux.conf` ("ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"), failed_when_result=False.]
2026-03-10 00:49:28.938177 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-03-10 00:49:28.938190 | orchestrator | Tuesday 10 March 2026 00:49:20 +0000 (0:00:02.772) 0:00:11.383 *********
2026-03-10 00:49:28.938202 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-10 00:49:28.938214 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-10 00:49:28.938228 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-10 00:49:28.938240 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-10 00:49:28.938253 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-10 00:49:28.938273 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-10 00:49:28.938286 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-10 00:49:28.938310 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ******************
2026-03-10 00:49:28.938323 | orchestrator | Tuesday 10 March 2026 00:49:21 +0000 (0:00:01.360) 0:00:12.744 *********
2026-03-10 00:49:28.938335 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-03-10 00:49:28.938348 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-03-10 00:49:28.938359 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-03-10 00:49:28.938370 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-03-10 00:49:28.938380 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-03-10 00:49:28.938391 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-03-10 00:49:28.938402 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-03-10 00:49:28.938424 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:49:28.938442 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:49:28.938455 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:49:28.938467 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:49:28.938478 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:49:28.938489 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:49:28.938499 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:49:28.938524 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:49:28.938558 | orchestrator | TASKS
RECAP ********************************************************************
2026-03-10 00:49:28.938568 | orchestrator | Tuesday 10 March 2026 00:49:26 +0000 (0:00:04.772) 0:00:17.517 *********
2026-03-10 00:49:28.938579 | orchestrator | ===============================================================================
2026-03-10 00:49:28.938590 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.94s
2026-03-10 00:49:28.938601 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 4.77s
2026-03-10 00:49:28.938612 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 2.77s
2026-03-10 00:49:28.938622 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.10s
2026-03-10 00:49:28.938633 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.36s
2026-03-10 00:49:28.938644 | orchestrator | 2026-03-10 00:49:28 | INFO  | Task d2e65125-ed2b-4861-a31c-93a8643111e4 is in state SUCCESS
2026-03-10 00:49:28.941690 | orchestrator | 2026-03-10 00:49:28 | INFO  | Task c9eebeb8-32a9-40ce-aca4-d3746681f926 is in state STARTED
2026-03-10 00:49:28.945824 | orchestrator | 2026-03-10 00:49:28 | INFO  | Task b39566dc-d35c-4888-bb95-3f6dcfeb42f4 is in state STARTED
2026-03-10 00:49:28.946628 | orchestrator | 2026-03-10 00:49:28 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:49:28.948026 | orchestrator | 2026-03-10 00:49:28 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED
2026-03-10 00:49:28.948770 | orchestrator | 2026-03-10 00:49:28 | INFO  | Task 597c8702-ee71-4836-aa48-f87a6227930f is in state STARTED
2026-03-10 00:49:28.955814 | orchestrator | 2026-03-10 00:49:28 | INFO  | Task 57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state STARTED
2026-03-10 00:49:28.955893 | orchestrator | 2026-03-10 00:49:28 | INFO  | Wait 1 second(s) until the next check
[Identical polling cycles elided: the seven tasks fca26bd9, c9eebeb8, b39566dc, aa0b43a2, a667f894, 597c8702 and 57a77d2c all remained in state STARTED every ~3 seconds from 00:49:32 through 00:49:54.]
2026-03-10 00:49:57.405095 | orchestrator | 2026-03-10 00:49:57 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:49:57.405209 | orchestrator | 2026-03-10 00:49:57 | INFO  | Task c9eebeb8-32a9-40ce-aca4-d3746681f926 is in state SUCCESS
2026-03-10 00:49:57.406124 | orchestrator | 2026-03-10 00:49:57 | INFO  | Task b39566dc-d35c-4888-bb95-3f6dcfeb42f4 is in state STARTED
2026-03-10 00:49:57.406800 | orchestrator | 2026-03-10 00:49:57 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:49:57.407562 | orchestrator | 2026-03-10 00:49:57 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED
2026-03-10 00:49:57.410418 | orchestrator | 2026-03-10 00:49:57 | INFO  | Task 597c8702-ee71-4836-aa48-f87a6227930f is in state STARTED
2026-03-10 00:49:57.412686 | orchestrator | 2026-03-10 00:49:57 | INFO  | Task 57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state STARTED
2026-03-10 00:49:57.412730 | orchestrator | 2026-03-10 00:49:57 | INFO  | Wait 1 second(s) until the next check
[Identical polling cycles elided: the remaining six tasks all stayed in state STARTED every ~3 seconds from 00:50:00 through 00:50:06.]
2026-03-10 00:50:09.817128 | orchestrator | 2026-03-10 00:50:09 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:50:09.817273 | orchestrator | 2026-03-10 00:50:09 | INFO  | Task b39566dc-d35c-4888-bb95-3f6dcfeb42f4 is in state SUCCESS
2026-03-10 00:50:09.818717 | orchestrator | 2026-03-10 00:50:09 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:50:09.819663 | orchestrator | 2026-03-10 00:50:09 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED
2026-03-10 00:50:09.821262 | orchestrator | 2026-03-10 00:50:09 | INFO  | Task 597c8702-ee71-4836-aa48-f87a6227930f is in state STARTED
2026-03-10 00:50:09.822153 | orchestrator | 2026-03-10 00:50:09 | INFO  | Task 57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state STARTED
2026-03-10 00:50:09.822178 | orchestrator | 2026-03-10 00:50:09 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:50:12.881851 | orchestrator | 2026-03-10 00:50:12 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:50:12.884046 | orchestrator | 2026-03-10 00:50:12 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:50:12.885136 | orchestrator | 2026-03-10 00:50:12 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED
2026-03-10 00:50:12.887958 | orchestrator | 2026-03-10 00:50:12 | INFO  | Task 597c8702-ee71-4836-aa48-f87a6227930f is in state STARTED
2026-03-10 00:50:12.888633 | orchestrator | 2026-03-10 00:50:12 | INFO  | Task
57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state STARTED 2026-03-10 00:50:12.888661 | orchestrator | 2026-03-10 00:50:12 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:50:15.933279 | orchestrator | 2026-03-10 00:50:15 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:50:15.933883 | orchestrator | 2026-03-10 00:50:15 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:50:15.936611 | orchestrator | 2026-03-10 00:50:15 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:50:15.939514 | orchestrator | 2026-03-10 00:50:15 | INFO  | Task 597c8702-ee71-4836-aa48-f87a6227930f is in state STARTED 2026-03-10 00:50:15.942528 | orchestrator | 2026-03-10 00:50:15 | INFO  | Task 57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state STARTED 2026-03-10 00:50:15.943014 | orchestrator | 2026-03-10 00:50:15 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:50:19.020458 | orchestrator | 2026-03-10 00:50:19 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:50:19.020528 | orchestrator | 2026-03-10 00:50:19 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:50:19.020538 | orchestrator | 2026-03-10 00:50:19 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:50:19.021017 | orchestrator | 2026-03-10 00:50:19 | INFO  | Task 597c8702-ee71-4836-aa48-f87a6227930f is in state STARTED 2026-03-10 00:50:19.025022 | orchestrator | 2026-03-10 00:50:19 | INFO  | Task 57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state STARTED 2026-03-10 00:50:19.025062 | orchestrator | 2026-03-10 00:50:19 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:50:22.057530 | orchestrator | 2026-03-10 00:50:22 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:50:22.058438 | orchestrator | 2026-03-10 00:50:22 | INFO  | Task 
aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:50:22.062049 | orchestrator | 2026-03-10 00:50:22 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:50:22.062125 | orchestrator | 2026-03-10 00:50:22 | INFO  | Task 597c8702-ee71-4836-aa48-f87a6227930f is in state STARTED 2026-03-10 00:50:22.063366 | orchestrator | 2026-03-10 00:50:22 | INFO  | Task 57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state STARTED 2026-03-10 00:50:22.063396 | orchestrator | 2026-03-10 00:50:22 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:50:25.144286 | orchestrator | 2026-03-10 00:50:25 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:50:25.144938 | orchestrator | 2026-03-10 00:50:25 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:50:25.146763 | orchestrator | 2026-03-10 00:50:25 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:50:25.149316 | orchestrator | 2026-03-10 00:50:25 | INFO  | Task 597c8702-ee71-4836-aa48-f87a6227930f is in state STARTED 2026-03-10 00:50:25.153554 | orchestrator | 2026-03-10 00:50:25 | INFO  | Task 57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state STARTED 2026-03-10 00:50:25.153607 | orchestrator | 2026-03-10 00:50:25 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:50:28.244981 | orchestrator | 2026-03-10 00:50:28 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:50:28.249303 | orchestrator | 2026-03-10 00:50:28 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:50:28.257663 | orchestrator | 2026-03-10 00:50:28 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:50:28.257900 | orchestrator | 2026-03-10 00:50:28 | INFO  | Task 597c8702-ee71-4836-aa48-f87a6227930f is in state STARTED 2026-03-10 00:50:28.260037 | orchestrator | 2026-03-10 00:50:28 | INFO  | Task 
57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state STARTED 2026-03-10 00:50:28.260100 | orchestrator | 2026-03-10 00:50:28 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:50:31.345361 | orchestrator | 2026-03-10 00:50:31 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:50:31.347443 | orchestrator | 2026-03-10 00:50:31 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:50:31.352421 | orchestrator | 2026-03-10 00:50:31 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:50:31.352511 | orchestrator | 2026-03-10 00:50:31 | INFO  | Task 597c8702-ee71-4836-aa48-f87a6227930f is in state STARTED 2026-03-10 00:50:31.353726 | orchestrator | 2026-03-10 00:50:31 | INFO  | Task 57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state STARTED 2026-03-10 00:50:31.353764 | orchestrator | 2026-03-10 00:50:31 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:50:34.515556 | orchestrator | 2026-03-10 00:50:34 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:50:34.516509 | orchestrator | 2026-03-10 00:50:34 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:50:34.517807 | orchestrator | 2026-03-10 00:50:34 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:50:34.519386 | orchestrator | 2026-03-10 00:50:34 | INFO  | Task 597c8702-ee71-4836-aa48-f87a6227930f is in state STARTED 2026-03-10 00:50:34.520475 | orchestrator | 2026-03-10 00:50:34 | INFO  | Task 57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state STARTED 2026-03-10 00:50:34.520510 | orchestrator | 2026-03-10 00:50:34 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:50:37.641127 | orchestrator | 2026-03-10 00:50:37 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:50:37.641665 | orchestrator | 2026-03-10 00:50:37 | INFO  | Task 
aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:50:37.643222 | orchestrator | 2026-03-10 00:50:37 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:50:37.643267 | orchestrator | 2026-03-10 00:50:37 | INFO  | Task 597c8702-ee71-4836-aa48-f87a6227930f is in state STARTED 2026-03-10 00:50:37.643930 | orchestrator | 2026-03-10 00:50:37 | INFO  | Task 57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state STARTED 2026-03-10 00:50:37.643959 | orchestrator | 2026-03-10 00:50:37 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:50:40.796083 | orchestrator | 2026-03-10 00:50:40 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:50:40.797849 | orchestrator | 2026-03-10 00:50:40 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:50:40.800203 | orchestrator | 2026-03-10 00:50:40 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:50:40.802406 | orchestrator | 2026-03-10 00:50:40 | INFO  | Task 597c8702-ee71-4836-aa48-f87a6227930f is in state STARTED 2026-03-10 00:50:40.805534 | orchestrator | 2026-03-10 00:50:40 | INFO  | Task 57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state STARTED 2026-03-10 00:50:40.805605 | orchestrator | 2026-03-10 00:50:40 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:50:43.910162 | orchestrator | 2026-03-10 00:50:43 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:50:43.917969 | orchestrator | 2026-03-10 00:50:43 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:50:43.923378 | orchestrator | 2026-03-10 00:50:43 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:50:43.923471 | orchestrator | 2026-03-10 00:50:43 | INFO  | Task 597c8702-ee71-4836-aa48-f87a6227930f is in state STARTED 2026-03-10 00:50:43.929720 | orchestrator | 2026-03-10 00:50:43 | INFO  | Task 
57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state STARTED 2026-03-10 00:50:43.929793 | orchestrator | 2026-03-10 00:50:43 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:50:47.033924 | orchestrator | 2026-03-10 00:50:47 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:50:47.035911 | orchestrator | 2026-03-10 00:50:47 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:50:47.037664 | orchestrator | 2026-03-10 00:50:47 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:50:47.043496 | orchestrator | 2026-03-10 00:50:47 | INFO  | Task 597c8702-ee71-4836-aa48-f87a6227930f is in state STARTED 2026-03-10 00:50:47.043941 | orchestrator | 2026-03-10 00:50:47 | INFO  | Task 57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state STARTED 2026-03-10 00:50:47.043963 | orchestrator | 2026-03-10 00:50:47 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:50:50.113954 | orchestrator | 2026-03-10 00:50:50 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:50:50.114014 | orchestrator | 2026-03-10 00:50:50 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:50:50.114094 | orchestrator | 2026-03-10 00:50:50 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:50:50.119936 | orchestrator | 2026-03-10 00:50:50 | INFO  | Task 597c8702-ee71-4836-aa48-f87a6227930f is in state STARTED 2026-03-10 00:50:50.120495 | orchestrator | 2026-03-10 00:50:50 | INFO  | Task 57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state STARTED 2026-03-10 00:50:50.120520 | orchestrator | 2026-03-10 00:50:50 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:50:53.205587 | orchestrator | 2026-03-10 00:50:53 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:50:53.207734 | orchestrator | 2026-03-10 00:50:53 | INFO  | Task 
aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:50:53.209805 | orchestrator | 2026-03-10 00:50:53 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED
2026-03-10 00:50:53.209851 | orchestrator | 2026-03-10 00:50:53 | INFO  | Task 597c8702-ee71-4836-aa48-f87a6227930f is in state SUCCESS
2026-03-10 00:50:53.210650 | orchestrator |
2026-03-10 00:50:53.210694 | orchestrator |
2026-03-10 00:50:53.210700 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-10 00:50:53.210706 | orchestrator |
2026-03-10 00:50:53.210713 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-10 00:50:53.210720 | orchestrator | Tuesday 10 March 2026 00:49:10 +0000 (0:00:01.151) 0:00:01.151 *********
2026-03-10 00:50:53.210726 | orchestrator | ok: [testbed-manager] => {
2026-03-10 00:50:53.210734 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-10 00:50:53.210741 | orchestrator | }
2026-03-10 00:50:53.210747 | orchestrator |
2026-03-10 00:50:53.210782 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-10 00:50:53.210789 | orchestrator | Tuesday 10 March 2026 00:49:10 +0000 (0:00:00.406) 0:00:01.559 *********
2026-03-10 00:50:53.210861 | orchestrator | ok: [testbed-manager]
2026-03-10 00:50:53.210871 | orchestrator |
2026-03-10 00:50:53.210878 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-10 00:50:53.210884 | orchestrator | Tuesday 10 March 2026 00:49:14 +0000 (0:00:03.289) 0:00:04.849 *********
2026-03-10 00:50:53.210890 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-10 00:50:53.210896 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-10 00:50:53.210903 | orchestrator |
2026-03-10 00:50:53.210909 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-10 00:50:53.210916 | orchestrator | Tuesday 10 March 2026 00:49:16 +0000 (0:00:01.782) 0:00:06.631 *********
2026-03-10 00:50:53.210937 | orchestrator | changed: [testbed-manager]
2026-03-10 00:50:53.210944 | orchestrator |
2026-03-10 00:50:53.210967 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-10 00:50:53.210974 | orchestrator | Tuesday 10 March 2026 00:49:20 +0000 (0:00:04.374) 0:00:11.006 *********
2026-03-10 00:50:53.210979 | orchestrator | changed: [testbed-manager]
2026-03-10 00:50:53.210986 | orchestrator |
2026-03-10 00:50:53.210992 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-10 00:50:53.210998 | orchestrator | Tuesday 10 March 2026 00:49:24 +0000 (0:00:04.240) 0:00:15.246 *********
2026-03-10 00:50:53.211005 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2026-03-10 00:50:53.211011 | orchestrator | ok: [testbed-manager]
2026-03-10 00:50:53.211017 | orchestrator |
2026-03-10 00:50:53.211066 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-10 00:50:53.211071 | orchestrator | Tuesday 10 March 2026 00:49:52 +0000 (0:00:27.498) 0:00:42.744 *********
2026-03-10 00:50:53.211085 | orchestrator | changed: [testbed-manager]
2026-03-10 00:50:53.211089 | orchestrator |
2026-03-10 00:50:53.211093 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:50:53.211097 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:50:53.211102 | orchestrator |
2026-03-10 00:50:53.211260 | orchestrator |
2026-03-10 00:50:53.211265 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:50:53.211271 | orchestrator | Tuesday 10 March 2026 00:49:55 +0000 (0:00:02.939) 0:00:45.683 *********
2026-03-10 00:50:53.211279 | orchestrator | ===============================================================================
2026-03-10 00:50:53.211322 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 27.50s
2026-03-10 00:50:53.211337 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 4.37s
2026-03-10 00:50:53.211341 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 4.24s
2026-03-10 00:50:53.211346 | orchestrator | osism.services.homer : Create traefik external network ------------------ 3.29s
2026-03-10 00:50:53.211350 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.94s
2026-03-10 00:50:53.211354 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.78s
2026-03-10 00:50:53.211359 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.41s
2026-03-10 00:50:53.211363 | orchestrator |
2026-03-10 00:50:53.211367 | orchestrator |
2026-03-10 00:50:53.211372 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-10 00:50:53.211377 | orchestrator |
2026-03-10 00:50:53.211383 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-10 00:50:53.211390 | orchestrator | Tuesday 10 March 2026 00:49:09 +0000 (0:00:01.754) 0:00:01.754 *********
2026-03-10 00:50:53.211441 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-10 00:50:53.211453 | orchestrator |
2026-03-10 00:50:53.211459 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-10 00:50:53.211466 | orchestrator | Tuesday 10 March 2026 00:49:10 +0000 (0:00:01.405) 0:00:03.160 *********
2026-03-10 00:50:53.211472 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-10 00:50:53.211479 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-10 00:50:53.211485 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-10 00:50:53.211507 | orchestrator |
2026-03-10 00:50:53.211515 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-10 00:50:53.211521 | orchestrator | Tuesday 10 March 2026 00:49:14 +0000 (0:00:03.293) 0:00:06.969 *********
2026-03-10 00:50:53.211527 | orchestrator | changed: [testbed-manager]
2026-03-10 00:50:53.211541 | orchestrator |
2026-03-10 00:50:53.211547 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-10 00:50:53.211554 | orchestrator | Tuesday 10 March 2026 00:49:17 +0000 (0:00:03.293) 0:00:10.263 *********
2026-03-10 00:50:53.211572 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-10 00:50:53.211666 | orchestrator | ok: [testbed-manager]
2026-03-10 00:50:53.211673 | orchestrator |
2026-03-10 00:50:53.211680 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-10 00:50:53.211686 | orchestrator | Tuesday 10 March 2026 00:49:57 +0000 (0:00:39.879) 0:00:50.142 *********
2026-03-10 00:50:53.211693 | orchestrator | changed: [testbed-manager]
2026-03-10 00:50:53.211699 | orchestrator |
2026-03-10 00:50:53.211706 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-10 00:50:53.211713 | orchestrator | Tuesday 10 March 2026 00:49:59 +0000 (0:00:02.418) 0:00:52.561 *********
2026-03-10 00:50:53.211719 | orchestrator | ok: [testbed-manager]
2026-03-10 00:50:53.211726 | orchestrator |
2026-03-10 00:50:53.211733 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-10 00:50:53.211738 | orchestrator | Tuesday 10 March 2026 00:50:00 +0000 (0:00:00.832) 0:00:53.393 *********
2026-03-10 00:50:53.211742 | orchestrator | changed: [testbed-manager]
2026-03-10 00:50:53.211746 | orchestrator |
2026-03-10 00:50:53.211749 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-10 00:50:53.211753 | orchestrator | Tuesday 10 March 2026 00:50:04 +0000 (0:00:03.552) 0:00:56.946 *********
2026-03-10 00:50:53.211757 | orchestrator | changed: [testbed-manager]
2026-03-10 00:50:53.211763 | orchestrator |
2026-03-10 00:50:53.211770 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-10 00:50:53.211776 | orchestrator | Tuesday 10 March 2026 00:50:06 +0000 (0:00:02.185) 0:00:59.132 *********
2026-03-10 00:50:53.211783 | orchestrator | changed: [testbed-manager]
2026-03-10 00:50:53.211809 | orchestrator |
2026-03-10 00:50:53.211816 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-10 00:50:53.211823 | orchestrator | Tuesday 10 March 2026 00:50:07 +0000 (0:00:01.347) 0:01:00.480 *********
2026-03-10 00:50:53.211829 | orchestrator | ok: [testbed-manager]
2026-03-10 00:50:53.211835 | orchestrator |
2026-03-10 00:50:53.211838 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:50:53.211842 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:50:53.211847 | orchestrator |
2026-03-10 00:50:53.211850 | orchestrator |
2026-03-10 00:50:53.211854 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:50:53.211858 | orchestrator | Tuesday 10 March 2026 00:50:08 +0000 (0:00:00.440) 0:01:00.920 *********
2026-03-10 00:50:53.211861 | orchestrator | ===============================================================================
2026-03-10 00:50:53.211865 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 39.88s
2026-03-10 00:50:53.211869 | orchestrator | osism.services.openstackclient : Create required directories ------------ 3.81s
2026-03-10 00:50:53.211873 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 3.55s
2026-03-10 00:50:53.211876 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 3.29s
2026-03-10 00:50:53.211880 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 2.42s
2026-03-10 00:50:53.211884 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 2.19s
2026-03-10 00:50:53.211887 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 1.40s
2026-03-10 00:50:53.211891 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 1.35s
2026-03-10 00:50:53.212208 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.83s
2026-03-10 00:50:53.212223 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.44s
2026-03-10 00:50:53.212227 | orchestrator |
2026-03-10 00:50:53.212231 | orchestrator |
2026-03-10 00:50:53.212234 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-10 00:50:53.212238 | orchestrator |
2026-03-10 00:50:53.212242 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-10 00:50:53.212246 | orchestrator | Tuesday 10 March 2026 00:49:35 +0000 (0:00:00.666) 0:00:00.666 *********
2026-03-10 00:50:53.212249 | orchestrator | ok: [testbed-manager]
2026-03-10 00:50:53.212268 | orchestrator |
2026-03-10 00:50:53.212273 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-10 00:50:53.212277 | orchestrator | Tuesday 10 March 2026 00:49:37 +0000 (0:00:01.974) 0:00:02.640 *********
2026-03-10 00:50:53.212281 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-10 00:50:53.212284 | orchestrator |
2026-03-10 00:50:53.212288 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-10 00:50:53.212292 | orchestrator | Tuesday 10 March 2026 00:49:39 +0000 (0:00:01.522) 0:00:04.163 *********
2026-03-10 00:50:53.212296 | orchestrator | changed: [testbed-manager]
2026-03-10 00:50:53.212300 | orchestrator |
2026-03-10 00:50:53.212304 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-10 00:50:53.212307 | orchestrator | Tuesday 10 March 2026 00:49:43 +0000 (0:00:03.859) 0:00:08.022 *********
2026-03-10 00:50:53.212311 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-10 00:50:53.212315 | orchestrator | ok: [testbed-manager]
2026-03-10 00:50:53.212319 | orchestrator |
2026-03-10 00:50:53.212323 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-10 00:50:53.212326 | orchestrator | Tuesday 10 March 2026 00:50:44 +0000 (0:01:01.327) 0:01:09.350 *********
2026-03-10 00:50:53.212330 | orchestrator | changed: [testbed-manager]
2026-03-10 00:50:53.212334 | orchestrator |
2026-03-10 00:50:53.212337 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:50:53.212341 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:50:53.212345 | orchestrator |
2026-03-10 00:50:53.212349 | orchestrator |
2026-03-10 00:50:53.212353 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:50:53.212447 | orchestrator | Tuesday 10 March 2026 00:50:50 +0000 (0:00:06.174) 0:01:15.525 *********
2026-03-10 00:50:53.212456 | orchestrator | ===============================================================================
2026-03-10 00:50:53.212462 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 61.33s
2026-03-10 00:50:53.212478 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 6.18s
2026-03-10 00:50:53.212484 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 3.86s
2026-03-10 00:50:53.212490 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.97s
2026-03-10 00:50:53.212495 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 1.52s
2026-03-10 00:50:53.212526 | orchestrator | 2026-03-10 00:50:53 | INFO  | Task 
57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state STARTED
2026-03-10 00:50:53.212534 | orchestrator | 2026-03-10 00:50:53 | INFO  | Wait 1 second(s) until the next check
[... poll cycles for the four remaining tasks (fca26bd9-e727-4e30-8116-6d36e203d006, aa0b43a2-64ac-4f65-a615-41e3e6017668, a667f894-a4d9-42df-998c-61957a35c71f, 57a77d2c-4f5c-48e4-ab59-c7042e91e904, all in state STARTED) repeated at 00:50:56, 00:50:59, 00:51:02 and 00:51:05 ...]
2026-03-10 00:51:05.461718 | orchestrator |
2026-03-10 00:51:05.461740 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-10 00:51:05.461758 | orchestrator |
2026-03-10 00:51:05.461775 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-10 00:51:05.461792 | orchestrator | Tuesday 10 March 2026 00:49:11 +0000 (0:00:00.448) 0:00:00.448 *********
2026-03-10 00:51:05.461809 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-10 00:51:05.461846 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-10 00:51:05.461863 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-10 00:51:05.461880 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-10 00:51:05.461897 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-10 00:51:05.461914 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-10 00:51:05.461930 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-10 00:51:05.461946 | orchestrator |
2026-03-10 00:51:05.461963 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-10 00:51:05.461980 | orchestrator |
2026-03-10 00:51:05.461996 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2026-03-10 00:51:05.462013 | orchestrator | Tuesday 10 March 2026 00:49:13 +0000 (0:00:02.446) 0:00:02.895 ********* 2026-03-10 00:51:05.462164 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:51:05.462187 | orchestrator | 2026-03-10 00:51:05.462206 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2026-03-10 00:51:05.462224 | orchestrator | Tuesday 10 March 2026 00:49:15 +0000 (0:00:01.994) 0:00:04.890 ********* 2026-03-10 00:51:05.462242 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:51:05.462291 | orchestrator | ok: [testbed-manager] 2026-03-10 00:51:05.462310 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:51:05.462328 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:51:05.462345 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:51:05.462364 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:51:05.462382 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:51:05.462399 | orchestrator | 2026-03-10 00:51:05.462418 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2026-03-10 00:51:05.462436 | orchestrator | Tuesday 10 March 2026 00:49:18 +0000 (0:00:02.815) 0:00:07.705 ********* 2026-03-10 00:51:05.462454 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:51:05.462471 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:51:05.462488 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:51:05.462505 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:51:05.462522 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:51:05.462539 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:51:05.462556 | orchestrator | ok: 
[testbed-manager] 2026-03-10 00:51:05.462573 | orchestrator | 2026-03-10 00:51:05.462591 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2026-03-10 00:51:05.462608 | orchestrator | Tuesday 10 March 2026 00:49:23 +0000 (0:00:04.870) 0:00:12.575 ********* 2026-03-10 00:51:05.462625 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:51:05.462642 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:51:05.462659 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:51:05.462676 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:51:05.462693 | orchestrator | changed: [testbed-manager] 2026-03-10 00:51:05.462710 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:51:05.462726 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:51:05.462744 | orchestrator | 2026-03-10 00:51:05.462761 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2026-03-10 00:51:05.462778 | orchestrator | Tuesday 10 March 2026 00:49:27 +0000 (0:00:03.838) 0:00:16.414 ********* 2026-03-10 00:51:05.462795 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:51:05.462812 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:51:05.462829 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:51:05.462846 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:51:05.462862 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:51:05.462878 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:51:05.462896 | orchestrator | changed: [testbed-manager] 2026-03-10 00:51:05.462913 | orchestrator | 2026-03-10 00:51:05.462930 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2026-03-10 00:51:05.462947 | orchestrator | Tuesday 10 March 2026 00:49:45 +0000 (0:00:18.666) 0:00:35.080 ********* 2026-03-10 00:51:05.462964 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:51:05.462981 | orchestrator | changed: [testbed-node-2] 
2026-03-10 00:51:05.462998 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:51:05.463015 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:51:05.463056 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:51:05.463075 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:51:05.463091 | orchestrator | changed: [testbed-manager] 2026-03-10 00:51:05.463107 | orchestrator | 2026-03-10 00:51:05.463124 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2026-03-10 00:51:05.463140 | orchestrator | Tuesday 10 March 2026 00:50:27 +0000 (0:00:41.534) 0:01:16.615 ********* 2026-03-10 00:51:05.463157 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:51:05.463176 | orchestrator | 2026-03-10 00:51:05.463192 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2026-03-10 00:51:05.463209 | orchestrator | Tuesday 10 March 2026 00:50:29 +0000 (0:00:01.924) 0:01:18.539 ********* 2026-03-10 00:51:05.463225 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2026-03-10 00:51:05.463242 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2026-03-10 00:51:05.463269 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2026-03-10 00:51:05.463286 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2026-03-10 00:51:05.463330 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2026-03-10 00:51:05.463347 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2026-03-10 00:51:05.463363 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2026-03-10 00:51:05.463379 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2026-03-10 00:51:05.463395 | orchestrator | changed: 
[testbed-node-3] => (item=stream.conf) 2026-03-10 00:51:05.463412 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2026-03-10 00:51:05.463437 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2026-03-10 00:51:05.463454 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2026-03-10 00:51:05.463470 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2026-03-10 00:51:05.463487 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2026-03-10 00:51:05.463503 | orchestrator | 2026-03-10 00:51:05.463519 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2026-03-10 00:51:05.463536 | orchestrator | Tuesday 10 March 2026 00:50:35 +0000 (0:00:06.687) 0:01:25.226 ********* 2026-03-10 00:51:05.463553 | orchestrator | ok: [testbed-manager] 2026-03-10 00:51:05.463569 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:51:05.463585 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:51:05.463602 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:51:05.463619 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:51:05.463636 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:51:05.463653 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:51:05.463670 | orchestrator | 2026-03-10 00:51:05.463688 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2026-03-10 00:51:05.463705 | orchestrator | Tuesday 10 March 2026 00:50:37 +0000 (0:00:01.450) 0:01:26.677 ********* 2026-03-10 00:51:05.463722 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:51:05.463739 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:51:05.463754 | orchestrator | changed: [testbed-manager] 2026-03-10 00:51:05.463770 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:51:05.463786 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:51:05.463802 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:51:05.463818 | 
orchestrator | changed: [testbed-node-5] 2026-03-10 00:51:05.463834 | orchestrator | 2026-03-10 00:51:05.463852 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2026-03-10 00:51:05.463869 | orchestrator | Tuesday 10 March 2026 00:50:39 +0000 (0:00:02.531) 0:01:29.209 ********* 2026-03-10 00:51:05.463885 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:51:05.463901 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:51:05.463917 | orchestrator | ok: [testbed-manager] 2026-03-10 00:51:05.463933 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:51:05.463949 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:51:05.463966 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:51:05.463982 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:51:05.463998 | orchestrator | 2026-03-10 00:51:05.464014 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2026-03-10 00:51:05.464032 | orchestrator | Tuesday 10 March 2026 00:50:42 +0000 (0:00:02.350) 0:01:31.559 ********* 2026-03-10 00:51:05.464081 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:51:05.464091 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:51:05.464101 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:51:05.464111 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:51:05.464120 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:51:05.464130 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:51:05.464139 | orchestrator | ok: [testbed-manager] 2026-03-10 00:51:05.464149 | orchestrator | 2026-03-10 00:51:05.464159 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2026-03-10 00:51:05.464168 | orchestrator | Tuesday 10 March 2026 00:50:45 +0000 (0:00:03.665) 0:01:35.225 ********* 2026-03-10 00:51:05.464188 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2026-03-10 
00:51:05.464200 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:51:05.464210 | orchestrator | 2026-03-10 00:51:05.464220 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2026-03-10 00:51:05.464229 | orchestrator | Tuesday 10 March 2026 00:50:48 +0000 (0:00:02.250) 0:01:37.475 ********* 2026-03-10 00:51:05.464239 | orchestrator | changed: [testbed-manager] 2026-03-10 00:51:05.464248 | orchestrator | 2026-03-10 00:51:05.464257 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2026-03-10 00:51:05.464267 | orchestrator | Tuesday 10 March 2026 00:50:50 +0000 (0:00:02.577) 0:01:40.053 ********* 2026-03-10 00:51:05.464277 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:51:05.464286 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:51:05.464296 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:51:05.464305 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:51:05.464315 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:51:05.464324 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:51:05.464333 | orchestrator | changed: [testbed-manager] 2026-03-10 00:51:05.464343 | orchestrator | 2026-03-10 00:51:05.464352 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:51:05.464362 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:51:05.464374 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:51:05.464384 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:51:05.464394 | orchestrator | 
testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:51:05.464414 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:51:05.464425 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:51:05.464434 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:51:05.464444 | orchestrator | 2026-03-10 00:51:05.464454 | orchestrator | 2026-03-10 00:51:05.464471 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:51:05.464481 | orchestrator | Tuesday 10 March 2026 00:51:02 +0000 (0:00:11.776) 0:01:51.829 ********* 2026-03-10 00:51:05.464491 | orchestrator | =============================================================================== 2026-03-10 00:51:05.464500 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 41.53s 2026-03-10 00:51:05.464509 | orchestrator | osism.services.netdata : Add repository -------------------------------- 18.67s 2026-03-10 00:51:05.464519 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 11.78s 2026-03-10 00:51:05.464529 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.69s 2026-03-10 00:51:05.464538 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 4.87s 2026-03-10 00:51:05.464548 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.84s 2026-03-10 00:51:05.464557 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 3.67s 2026-03-10 00:51:05.464566 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.82s 2026-03-10 00:51:05.464582 | orchestrator | osism.services.netdata : 
Set sysctl vm.max_map_count parameter ---------- 2.58s 2026-03-10 00:51:05.464592 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.53s 2026-03-10 00:51:05.464602 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.45s 2026-03-10 00:51:05.464611 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 2.35s 2026-03-10 00:51:05.464621 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 2.25s 2026-03-10 00:51:05.464630 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.99s 2026-03-10 00:51:05.464640 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.92s 2026-03-10 00:51:05.464650 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.45s 2026-03-10 00:51:05.464660 | orchestrator | 2026-03-10 00:51:05 | INFO  | Task 57a77d2c-4f5c-48e4-ab59-c7042e91e904 is in state SUCCESS 2026-03-10 00:51:05.464669 | orchestrator | 2026-03-10 00:51:05 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:08.520155 | orchestrator | 2026-03-10 00:51:08 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:51:08.522198 | orchestrator | 2026-03-10 00:51:08 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:51:08.524492 | orchestrator | 2026-03-10 00:51:08 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:51:08.525262 | orchestrator | 2026-03-10 00:51:08 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:11.605807 | orchestrator | 2026-03-10 00:51:11 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:51:11.606048 | orchestrator | 2026-03-10 00:51:11 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 
00:51:11.607143 | orchestrator | 2026-03-10 00:51:11 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:51:11.607171 | orchestrator | 2026-03-10 00:51:11 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:14.644544 | orchestrator | 2026-03-10 00:51:14 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:51:14.645089 | orchestrator | 2026-03-10 00:51:14 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:51:14.646500 | orchestrator | 2026-03-10 00:51:14 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:51:14.646597 | orchestrator | 2026-03-10 00:51:14 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:17.688555 | orchestrator | 2026-03-10 00:51:17 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:51:17.690835 | orchestrator | 2026-03-10 00:51:17 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:51:17.693669 | orchestrator | 2026-03-10 00:51:17 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:51:17.693726 | orchestrator | 2026-03-10 00:51:17 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:20.787514 | orchestrator | 2026-03-10 00:51:20 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:51:20.791417 | orchestrator | 2026-03-10 00:51:20 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:51:20.796625 | orchestrator | 2026-03-10 00:51:20 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:51:20.796708 | orchestrator | 2026-03-10 00:51:20 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:23.851138 | orchestrator | 2026-03-10 00:51:23 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:51:23.851958 | orchestrator | 2026-03-10 00:51:23 | 
INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:51:23.853425 | orchestrator | 2026-03-10 00:51:23 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:51:23.853472 | orchestrator | 2026-03-10 00:51:23 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:26.909760 | orchestrator | 2026-03-10 00:51:26 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:51:26.911254 | orchestrator | 2026-03-10 00:51:26 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:51:26.912397 | orchestrator | 2026-03-10 00:51:26 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state STARTED 2026-03-10 00:51:26.912669 | orchestrator | 2026-03-10 00:51:26 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:51:29.958284 | orchestrator | 2026-03-10 00:51:29 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:51:29.958789 | orchestrator | 2026-03-10 00:51:29 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:51:29.964315 | orchestrator | 2026-03-10 00:51:29 | INFO  | Task a667f894-a4d9-42df-998c-61957a35c71f is in state SUCCESS 2026-03-10 00:51:29.967818 | orchestrator | 2026-03-10 00:51:29.967935 | orchestrator | 2026-03-10 00:51:29.968047 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-10 00:51:29.968064 | orchestrator | 2026-03-10 00:51:29.968076 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-10 00:51:29.968088 | orchestrator | Tuesday 10 March 2026 00:48:53 +0000 (0:00:00.316) 0:00:00.316 ********* 2026-03-10 00:51:29.968101 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:51:29.968114 | orchestrator | 
2026-03-10 00:51:29.968125 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-10 00:51:29.968136 | orchestrator | Tuesday 10 March 2026 00:48:55 +0000 (0:00:01.913) 0:00:02.229 ********* 2026-03-10 00:51:29.968147 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-10 00:51:29.968158 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-10 00:51:29.968169 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-10 00:51:29.968180 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-10 00:51:29.968191 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-10 00:51:29.968201 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-10 00:51:29.968212 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-10 00:51:29.968224 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-10 00:51:29.968235 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-10 00:51:29.968246 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-10 00:51:29.968257 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-10 00:51:29.968268 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-10 00:51:29.968278 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-10 00:51:29.968289 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-10 00:51:29.968302 | orchestrator | changed: 
[testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-10 00:51:29.968344 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-10 00:51:29.968417 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-10 00:51:29.968486 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-10 00:51:29.968506 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-10 00:51:29.968525 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-10 00:51:29.968537 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-10 00:51:29.968578 | orchestrator | 2026-03-10 00:51:29.968592 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-10 00:51:29.968603 | orchestrator | Tuesday 10 March 2026 00:49:02 +0000 (0:00:06.300) 0:00:08.529 ********* 2026-03-10 00:51:29.968616 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:51:29.968629 | orchestrator | 2026-03-10 00:51:29.968653 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-10 00:51:29.968665 | orchestrator | Tuesday 10 March 2026 00:49:04 +0000 (0:00:01.830) 0:00:10.359 ********* 2026-03-10 00:51:29.968697 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.968715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.968755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.968768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.968780 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.968803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.968816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.968833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.968846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.968879 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.968892 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.968904 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.968923 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.968934 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.968945 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.968957 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.968976 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.968996 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.969008 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 
'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.969056 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.969087 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.969107 | orchestrator | 2026-03-10 00:51:29.969126 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-10 00:51:29.969143 | orchestrator | Tuesday 10 March 2026 00:49:11 +0000 (0:00:07.515) 0:00:17.875 ********* 2026-03-10 00:51:29.969155 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:51:29.969167 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969184 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969196 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:51:29.969208 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:51:29.969235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 
'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969266 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:51:29.969280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:51:29.969300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969344 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:51:29.969364 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969383 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:29.969399 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969411 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:51:29.969429 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:51:29.969449 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969473 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:51:29.969484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:51:29.969495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969527 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:51:29.969539 
| orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:51:29.969557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969576 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969587 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:51:29.969598 | orchestrator | 2026-03-10 00:51:29.969609 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-10 00:51:29.969620 | orchestrator | Tuesday 10 March 2026 00:49:13 +0000 (0:00:02.152) 0:00:20.027 ********* 2026-03-10 
00:51:29.969631 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:51:29.969642 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969654 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969664 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:51:29.969676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 
'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:51:29.969692 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969721 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:29.969747 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 
'dimensions': {}}})  2026-03-10 00:51:29.969758 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969781 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:51:29.969792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:51:29.969804 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:51:29.969820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969831 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969878 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:51:29.969890 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969901 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 
'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969912 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:51:29.969923 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:51:29.969934 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:51:29.969945 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-10 00:51:29.969961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969979 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.969990 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:51:29.970001 | orchestrator | 2026-03-10 00:51:29.970151 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-10 00:51:29.970170 | orchestrator | Tuesday 10 March 2026 00:49:15 +0000 (0:00:02.228) 0:00:22.255 ********* 2026-03-10 00:51:29.970181 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:51:29.970192 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:29.970203 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:51:29.970213 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:51:29.970241 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:51:29.970261 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:51:29.970273 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:51:29.970283 | orchestrator | 2026-03-10 00:51:29.970294 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-10 00:51:29.970305 | orchestrator | Tuesday 10 March 2026 00:49:17 +0000 (0:00:01.351) 0:00:23.607 ********* 2026-03-10 00:51:29.970316 | orchestrator | skipping: [testbed-manager] 2026-03-10 00:51:29.970327 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:51:29.970338 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:51:29.970348 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:51:29.970359 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:51:29.970369 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:51:29.970380 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:51:29.970390 | orchestrator | 2026-03-10 00:51:29.970401 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-10 00:51:29.970412 | orchestrator | Tuesday 10 March 2026 00:49:18 +0000 (0:00:01.260) 
0:00:24.868 ********* 2026-03-10 00:51:29.970423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.970435 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.970446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.970459 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.970478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.970489 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.970506 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.970516 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.970527 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.970537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.970554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.970574 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.970585 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.970596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.970612 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.970623 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.970633 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.970644 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 
'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.970666 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.970687 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.970698 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.970708 | orchestrator | 2026-03-10 00:51:29.970717 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-10 00:51:29.970727 | orchestrator | Tuesday 10 March 2026 00:49:28 +0000 (0:00:09.586) 0:00:34.455 ********* 2026-03-10 00:51:29.970737 
| orchestrator | [WARNING]: Skipped 2026-03-10 00:51:29.970748 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-10 00:51:29.970758 | orchestrator | to this access issue: 2026-03-10 00:51:29.970767 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-10 00:51:29.970777 | orchestrator | directory 2026-03-10 00:51:29.970787 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-10 00:51:29.970804 | orchestrator | 2026-03-10 00:51:29.970820 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2026-03-10 00:51:29.970830 | orchestrator | Tuesday 10 March 2026 00:49:31 +0000 (0:00:03.154) 0:00:37.609 ********* 2026-03-10 00:51:29.970840 | orchestrator | [WARNING]: Skipped 2026-03-10 00:51:29.970850 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-10 00:51:29.970865 | orchestrator | to this access issue: 2026-03-10 00:51:29.970876 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-10 00:51:29.970886 | orchestrator | directory 2026-03-10 00:51:29.970895 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-10 00:51:29.970904 | orchestrator | 2026-03-10 00:51:29.970914 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-10 00:51:29.970924 | orchestrator | Tuesday 10 March 2026 00:49:33 +0000 (0:00:02.310) 0:00:39.920 ********* 2026-03-10 00:51:29.970933 | orchestrator | [WARNING]: Skipped 2026-03-10 00:51:29.970942 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-10 00:51:29.970952 | orchestrator | to this access issue: 2026-03-10 00:51:29.970961 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-10 00:51:29.970971 | orchestrator | directory 2026-03-10 
00:51:29.970980 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-10 00:51:29.970990 | orchestrator | 2026-03-10 00:51:29.971000 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-10 00:51:29.971026 | orchestrator | Tuesday 10 March 2026 00:49:34 +0000 (0:00:01.308) 0:00:41.229 ********* 2026-03-10 00:51:29.971037 | orchestrator | [WARNING]: Skipped 2026-03-10 00:51:29.971046 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-10 00:51:29.971065 | orchestrator | to this access issue: 2026-03-10 00:51:29.971075 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-10 00:51:29.971084 | orchestrator | directory 2026-03-10 00:51:29.971094 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-10 00:51:29.971105 | orchestrator | 2026-03-10 00:51:29.971114 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-10 00:51:29.971124 | orchestrator | Tuesday 10 March 2026 00:49:36 +0000 (0:00:01.919) 0:00:43.148 ********* 2026-03-10 00:51:29.971133 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:51:29.971143 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:51:29.971153 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:51:29.971162 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:51:29.971172 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:51:29.971182 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:51:29.971191 | orchestrator | changed: [testbed-manager] 2026-03-10 00:51:29.971201 | orchestrator | 2026-03-10 00:51:29.971211 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-10 00:51:29.971220 | orchestrator | Tuesday 10 March 2026 00:49:43 +0000 (0:00:06.863) 0:00:50.012 ********* 2026-03-10 00:51:29.971230 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-10 00:51:29.971240 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-10 00:51:29.971250 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-10 00:51:29.971260 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-10 00:51:29.971270 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-10 00:51:29.971280 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-10 00:51:29.971289 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-10 00:51:29.971299 | orchestrator | 2026-03-10 00:51:29.971309 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-10 00:51:29.971359 | orchestrator | Tuesday 10 March 2026 00:49:48 +0000 (0:00:04.566) 0:00:54.579 ********* 2026-03-10 00:51:29.971370 | orchestrator | changed: [testbed-manager] 2026-03-10 00:51:29.971380 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:51:29.971390 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:51:29.971399 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:51:29.971409 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:51:29.971419 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:51:29.971428 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:51:29.971438 | orchestrator | 2026-03-10 00:51:29.971458 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-10 00:51:29.971469 | orchestrator | Tuesday 10 March 2026 00:49:52 +0000 (0:00:04.266) 0:00:58.845 ********* 2026-03-10 
00:51:29.971479 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.971490 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.971515 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.971540 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.971552 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.971562 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.971572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.971588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.971599 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.971623 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.971634 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.971644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.971654 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.971664 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.971675 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.971690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.971700 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.971726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:51:29.971737 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.971748 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.971758 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.971774 | orchestrator | 2026-03-10 00:51:29.971790 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-10 00:51:29.971806 | orchestrator | Tuesday 10 March 2026 00:49:55 +0000 (0:00:03.337) 0:01:02.183 ********* 2026-03-10 00:51:29.971830 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-10 
00:51:29.971851 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-10 00:51:29.971868 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-10 00:51:29.971883 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-10 00:51:29.971899 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-10 00:51:29.971914 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-10 00:51:29.971928 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-10 00:51:29.971945 | orchestrator | 2026-03-10 00:51:29.971961 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-10 00:51:29.971977 | orchestrator | Tuesday 10 March 2026 00:50:01 +0000 (0:00:05.183) 0:01:07.367 ********* 2026-03-10 00:51:29.971993 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-10 00:51:29.972007 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-10 00:51:29.972046 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-10 00:51:29.972071 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-10 00:51:29.972098 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-10 00:51:29.972115 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-10 00:51:29.972129 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-10 00:51:29.972139 | orchestrator 
| 2026-03-10 00:51:29.972149 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-10 00:51:29.972209 | orchestrator | Tuesday 10 March 2026 00:50:03 +0000 (0:00:02.960) 0:01:10.327 ********* 2026-03-10 00:51:29.972221 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.972243 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.972280 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.972292 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.972303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:51:29.972314 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-10 00:51:29.972338 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:51:29.972349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:51:29.972366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:51:29.972377 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:51:29.972387 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:51:29.972398 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2026-03-10 00:51:29.972408 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:51:29.972419 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:51:29.972440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:51:29.972451 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:51:29.972474 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:51:29.972485 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:51:29.972495 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:51:29.972505 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:51:29.972515 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:51:29.972525 | orchestrator |
2026-03-10 00:51:29.972535 | orchestrator | TASK [common : Creating log volume] ********************************************
2026-03-10 00:51:29.972552 | orchestrator | Tuesday 10 March 2026 00:50:08 +0000 (0:00:04.977) 0:01:15.305 *********
2026-03-10 00:51:29.972563 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:29.972573 | orchestrator | changed: [testbed-manager]
2026-03-10 00:51:29.972582 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:51:29.972592 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:51:29.972601 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:51:29.972611 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:51:29.972620 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:51:29.972630 | orchestrator |
2026-03-10 00:51:29.972639 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] ***********************
2026-03-10 00:51:29.972649 | orchestrator | Tuesday 10 March 2026 00:50:10 +0000 (0:00:01.849) 0:01:17.154 *********
2026-03-10 00:51:29.972659 | orchestrator | changed: [testbed-manager]
2026-03-10 00:51:29.972668 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:29.972678 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:51:29.972687 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:51:29.972697 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:51:29.972706 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:51:29.972716 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:51:29.972725 | orchestrator |
2026-03-10 00:51:29.972735 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-10 00:51:29.972744 | orchestrator | Tuesday 10 March 2026 00:50:12 +0000 (0:00:01.286) 0:01:18.440 *********
2026-03-10 00:51:29.972754 | orchestrator |
2026-03-10 00:51:29.972764 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-10 00:51:29.972774 | orchestrator | Tuesday 10 March 2026 00:50:12 +0000 (0:00:00.069) 0:01:18.510 *********
2026-03-10 00:51:29.972783 | orchestrator |
2026-03-10 00:51:29.972793 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-10 00:51:29.972803 | orchestrator | Tuesday 10 March 2026 00:50:12 +0000 (0:00:00.079) 0:01:18.590 *********
2026-03-10 00:51:29.972812 | orchestrator |
2026-03-10 00:51:29.972822 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-10 00:51:29.972831 | orchestrator | Tuesday 10 March 2026 00:50:12 +0000 (0:00:00.212) 0:01:18.802 *********
2026-03-10 00:51:29.972841 | orchestrator |
2026-03-10 00:51:29.972851 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-10 00:51:29.972860 | orchestrator | Tuesday 10 March 2026 00:50:12 +0000 (0:00:00.064) 0:01:18.866 *********
2026-03-10 00:51:29.972870 | orchestrator |
2026-03-10 00:51:29.972879 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-10 00:51:29.972890 | orchestrator | Tuesday 10 March 2026 00:50:12 +0000 (0:00:00.077) 0:01:18.944 *********
2026-03-10 00:51:29.972899 | orchestrator |
2026-03-10 00:51:29.972908 | orchestrator | TASK [common : Flush handlers] *************************************************
2026-03-10 00:51:29.972918 | orchestrator | Tuesday 10 March 2026 00:50:12 +0000 (0:00:00.064) 0:01:19.009 *********
2026-03-10 00:51:29.972928 | orchestrator |
2026-03-10 00:51:29.972937 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] ***************************
2026-03-10 00:51:29.972953 | orchestrator | Tuesday 10 March 2026 00:50:12 +0000 (0:00:00.084) 0:01:19.094 *********
2026-03-10 00:51:29.972963 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:29.972973 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:51:29.972982 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:51:29.972992 | orchestrator | changed: [testbed-manager]
2026-03-10 00:51:29.973001 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:51:29.973033 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:51:29.973045 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:51:29.973054 | orchestrator |
2026-03-10 00:51:29.973064 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] *********************
2026-03-10 00:51:29.973073 | orchestrator | Tuesday 10 March 2026 00:50:46 +0000 (0:00:34.193) 0:01:53.288 *********
2026-03-10 00:51:29.973084 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:29.973101 | orchestrator | changed: [testbed-manager]
2026-03-10 00:51:29.973111 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:51:29.973120 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:51:29.973129 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:51:29.973142 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:51:29.973157 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:51:29.973180 | orchestrator |
2026-03-10 00:51:29.973218 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] ****
2026-03-10 00:51:29.973233 | orchestrator | Tuesday 10 March 2026 00:51:15 +0000 (0:00:28.521) 0:02:21.809 *********
2026-03-10 00:51:29.973247 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:51:29.973263 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:51:29.973279 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:51:29.973293 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:51:29.973309 | orchestrator | ok: [testbed-manager]
2026-03-10 00:51:29.973324 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:51:29.973339 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:51:29.973356 | orchestrator |
2026-03-10 00:51:29.973371 | orchestrator | RUNNING HANDLER [common : Restart cron container] ******************************
2026-03-10 00:51:29.973386 | orchestrator | Tuesday 10 March 2026 00:51:18 +0000 (0:00:02.829) 0:02:24.639 *********
2026-03-10 00:51:29.973403 | orchestrator | changed: [testbed-manager]
2026-03-10 00:51:29.973420 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:51:29.973435 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:51:29.973449 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:51:29.973465 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:51:29.973480 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:51:29.973496 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:51:29.973511 | orchestrator |
2026-03-10 00:51:29.973528 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:51:29.973545 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-10 00:51:29.973562 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-10 00:51:29.973578 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-10 00:51:29.973594 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-10 00:51:29.973610 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-10 00:51:29.973627 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-10 00:51:29.973642 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0
2026-03-10 00:51:29.973657 | orchestrator |
2026-03-10 00:51:29.973672 | orchestrator |
2026-03-10 00:51:29.973688 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:51:29.973717 | orchestrator | Tuesday 10 March 2026 00:51:29 +0000 (0:00:10.716) 0:02:35.356 *********
2026-03-10 00:51:29.973734 | orchestrator | ===============================================================================
2026-03-10 00:51:29.973772 | orchestrator | common : Restart fluentd container ------------------------------------- 34.19s
2026-03-10 00:51:29.973788 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 28.52s
2026-03-10 00:51:29.973804 | orchestrator | common : Restart cron container ---------------------------------------- 10.72s
2026-03-10 00:51:29.973819 | orchestrator | common : Copying over config.json files for services -------------------- 9.59s
2026-03-10 00:51:29.973847 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 7.52s
2026-03-10 00:51:29.973863 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 6.86s
2026-03-10 00:51:29.973879 | orchestrator | common : Ensuring config directories exist ------------------------------ 6.30s
2026-03-10 00:51:29.973895 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 5.17s
2026-03-10 00:51:29.973911 | orchestrator | common : Check common containers ---------------------------------------- 4.98s
2026-03-10 00:51:29.973927 | orchestrator | common : Copying over cron logrotate config file ------------------------ 4.57s
2026-03-10 00:51:29.973941 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 4.27s
2026-03-10 00:51:29.973957 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.35s
2026-03-10 00:51:29.973972 | orchestrator | common : Find custom fluentd input config files ------------------------- 3.15s
2026-03-10 00:51:29.973986 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.96s
2026-03-10 00:51:29.974138 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.83s
2026-03-10 00:51:29.974166 | orchestrator | common : Find custom fluentd filter config files ------------------------ 2.31s
2026-03-10 00:51:29.974183 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.23s
2026-03-10 00:51:29.974200 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.15s
2026-03-10 00:51:29.974218 | orchestrator | common : Find custom fluentd output config files ------------------------ 1.92s
2026-03-10 00:51:29.974236 | orchestrator | common : include_tasks -------------------------------------------------- 1.91s
2026-03-10 00:51:29.974252 | orchestrator | 2026-03-10 00:51:29 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:51:33.014253 | orchestrator | 2026-03-10 00:51:33 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:51:33.017730 | orchestrator | 2026-03-10 00:51:33 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED
2026-03-10 00:51:33.018217 | orchestrator | 2026-03-10 00:51:33 | INFO  | Task b561c15b-863b-4d4f-9f1f-6508c60a4595 is in state STARTED
2026-03-10 00:51:33.022348 | orchestrator | 2026-03-10 00:51:33 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:51:33.023165 | orchestrator | 2026-03-10 00:51:33 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED
2026-03-10 00:51:33.024275 | orchestrator | 2026-03-10 00:51:33 | INFO  | Task 002d1203-c148-4643-bfe1-483f83e01c66 is in state STARTED
2026-03-10 00:51:33.024301 | orchestrator | 2026-03-10 00:51:33 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:51:36.072403 | orchestrator | 2026-03-10 00:51:36 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:51:36.074887 | orchestrator | 2026-03-10 00:51:36 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED
2026-03-10 00:51:36.074980 | orchestrator | 2026-03-10 00:51:36 | INFO  | Task b561c15b-863b-4d4f-9f1f-6508c60a4595 is in state STARTED
2026-03-10 00:51:36.076808 | orchestrator | 2026-03-10 00:51:36 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:51:36.078230 | orchestrator | 2026-03-10 00:51:36 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED
2026-03-10 00:51:36.080959 | orchestrator | 2026-03-10 00:51:36 | INFO  | Task 002d1203-c148-4643-bfe1-483f83e01c66 is in state STARTED
2026-03-10 00:51:36.081026 | orchestrator | 2026-03-10 00:51:36 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:51:39.136444 | orchestrator | 2026-03-10 00:51:39 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:51:39.137336 | orchestrator | 2026-03-10 00:51:39 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED
2026-03-10 00:51:39.137987 | orchestrator | 2026-03-10 00:51:39 | INFO  | Task b561c15b-863b-4d4f-9f1f-6508c60a4595 is in state STARTED
2026-03-10 00:51:39.138595 | orchestrator | 2026-03-10 00:51:39 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:51:39.139652 | orchestrator | 2026-03-10 00:51:39 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED
2026-03-10 00:51:39.141761 | orchestrator | 2026-03-10 00:51:39 | INFO  | Task 002d1203-c148-4643-bfe1-483f83e01c66 is in state STARTED
2026-03-10 00:51:39.141813 | orchestrator | 2026-03-10 00:51:39 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:51:42.178748 | orchestrator | 2026-03-10 00:51:42 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:51:42.180935 | orchestrator | 2026-03-10 00:51:42 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED
2026-03-10 00:51:42.181633 | orchestrator | 2026-03-10 00:51:42 | INFO  | Task b561c15b-863b-4d4f-9f1f-6508c60a4595 is in state STARTED
2026-03-10 00:51:42.182664 | orchestrator | 2026-03-10 00:51:42 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:51:42.183881 | orchestrator | 2026-03-10 00:51:42 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED
2026-03-10 00:51:42.186663 | orchestrator | 2026-03-10 00:51:42 | INFO  | Task 002d1203-c148-4643-bfe1-483f83e01c66 is in state STARTED
2026-03-10 00:51:42.186688 | orchestrator | 2026-03-10 00:51:42 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:51:45.226736 | orchestrator | 2026-03-10 00:51:45 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:51:45.226806 | orchestrator | 2026-03-10 00:51:45 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED
2026-03-10 00:51:45.226813 | orchestrator | 2026-03-10 00:51:45 | INFO  | Task b561c15b-863b-4d4f-9f1f-6508c60a4595 is in state STARTED
2026-03-10 00:51:45.227153 | orchestrator | 2026-03-10 00:51:45 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:51:45.228317 | orchestrator | 2026-03-10 00:51:45 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED
2026-03-10 00:51:45.229328 | orchestrator | 2026-03-10 00:51:45 | INFO  | Task 002d1203-c148-4643-bfe1-483f83e01c66 is in state STARTED
2026-03-10 00:51:45.229346 | orchestrator | 2026-03-10 00:51:45 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:51:48.325523 | orchestrator | 2026-03-10 00:51:48 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:51:48.327354 | orchestrator | 2026-03-10 00:51:48 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED
2026-03-10 00:51:48.329055 | orchestrator | 2026-03-10 00:51:48 | INFO  | Task b561c15b-863b-4d4f-9f1f-6508c60a4595 is in state STARTED
2026-03-10 00:51:48.329727 | orchestrator | 2026-03-10 00:51:48 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:51:48.331619 | orchestrator | 2026-03-10 00:51:48 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED
2026-03-10 00:51:48.336978 | orchestrator | 2026-03-10 00:51:48 | INFO  | Task 002d1203-c148-4643-bfe1-483f83e01c66 is in state STARTED
2026-03-10 00:51:48.337062 | orchestrator | 2026-03-10 00:51:48 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:51:51.394582 | orchestrator | 2026-03-10 00:51:51 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:51:51.395212 | orchestrator | 2026-03-10 00:51:51 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED
2026-03-10 00:51:51.395837 | orchestrator | 2026-03-10 00:51:51 | INFO  | Task b561c15b-863b-4d4f-9f1f-6508c60a4595 is in state SUCCESS
2026-03-10 00:51:51.396558 | orchestrator | 2026-03-10 00:51:51 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:51:51.399285 | orchestrator | 2026-03-10 00:51:51 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED
2026-03-10 00:51:51.400265 | orchestrator | 2026-03-10 00:51:51 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED
2026-03-10 00:51:51.404491 | orchestrator | 2026-03-10 00:51:51 | INFO  | Task 002d1203-c148-4643-bfe1-483f83e01c66 is in state STARTED
2026-03-10 00:51:51.404553 | orchestrator | 2026-03-10 00:51:51 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:51:54.470412 | orchestrator | 2026-03-10 00:51:54 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:51:54.473170 | orchestrator | 2026-03-10 00:51:54 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED
2026-03-10 00:51:54.473591 | orchestrator | 2026-03-10 00:51:54 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:51:54.475290 | orchestrator | 2026-03-10 00:51:54 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED
2026-03-10 00:51:54.475349 | orchestrator | 2026-03-10 00:51:54 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED
2026-03-10 00:51:54.476190 | orchestrator | 2026-03-10 00:51:54 | INFO  | Task 002d1203-c148-4643-bfe1-483f83e01c66 is in state STARTED
2026-03-10 00:51:54.476208 | orchestrator | 2026-03-10 00:51:54 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:51:57.572892 | orchestrator | 2026-03-10 00:51:57 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:51:57.580566 | orchestrator | 2026-03-10 00:51:57 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED
2026-03-10 00:51:57.580891 | orchestrator | 2026-03-10 00:51:57 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:51:57.584921 | orchestrator | 2026-03-10 00:51:57 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED
2026-03-10 00:51:57.585080 | orchestrator | 2026-03-10 00:51:57 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED
2026-03-10 00:51:57.587886 | orchestrator | 2026-03-10 00:51:57 | INFO  | Task 002d1203-c148-4643-bfe1-483f83e01c66 is in state STARTED
2026-03-10 00:51:57.587925 | orchestrator | 2026-03-10 00:51:57 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:52:00.674543 | orchestrator | 2026-03-10 00:52:00 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:52:00.674641 | orchestrator | 2026-03-10 00:52:00 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED
2026-03-10 00:52:00.674659 | orchestrator | 2026-03-10 00:52:00 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:52:00.674677 | orchestrator | 2026-03-10 00:52:00 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED
2026-03-10 00:52:00.675276 | orchestrator | 2026-03-10 00:52:00 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED
2026-03-10 00:52:00.676622 | orchestrator | 2026-03-10 00:52:00 | INFO  | Task 002d1203-c148-4643-bfe1-483f83e01c66 is in state STARTED
2026-03-10 00:52:00.676702 | orchestrator | 2026-03-10 00:52:00 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:52:03.808402 | orchestrator | 2026-03-10 00:52:03 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:52:03.810106 | orchestrator | 2026-03-10 00:52:03 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED
2026-03-10 00:52:03.812420 | orchestrator | 2026-03-10 00:52:03 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:52:03.813702 | orchestrator | 2026-03-10 00:52:03 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED
2026-03-10 00:52:03.815051 | orchestrator | 2026-03-10 00:52:03 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED
2026-03-10 00:52:03.816330 | orchestrator | 2026-03-10 00:52:03 | INFO  | Task 002d1203-c148-4643-bfe1-483f83e01c66 is in state STARTED
2026-03-10 00:52:03.816385 | orchestrator | 2026-03-10 00:52:03 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:52:07.045889 | orchestrator | 2026-03-10 00:52:06 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:52:07.046070 | orchestrator | 2026-03-10 00:52:07 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED
2026-03-10 00:52:07.046084 | orchestrator | 2026-03-10 00:52:07 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:52:07.046092 | orchestrator | 2026-03-10 00:52:07 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED
2026-03-10 00:52:07.060304 | orchestrator | 2026-03-10 00:52:07 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED
2026-03-10 00:52:07.070782 | orchestrator | 2026-03-10 00:52:07 | INFO  | Task 002d1203-c148-4643-bfe1-483f83e01c66 is in state STARTED
2026-03-10 00:52:07.070851 | orchestrator | 2026-03-10 00:52:07 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:52:10.241624 | orchestrator | 2026-03-10 00:52:10 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:52:10.241735 | orchestrator | 2026-03-10 00:52:10 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED
2026-03-10 00:52:10.241757 | orchestrator | 2026-03-10 00:52:10 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:52:10.241768 | orchestrator | 2026-03-10 00:52:10 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED
2026-03-10 00:52:10.241804 | orchestrator | 2026-03-10 00:52:10 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED
2026-03-10 00:52:10.241823 | orchestrator | 2026-03-10 00:52:10 | INFO  | Task 002d1203-c148-4643-bfe1-483f83e01c66 is in state STARTED
2026-03-10 00:52:10.241841 | orchestrator | 2026-03-10 00:52:10 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:52:13.418200 | orchestrator | 2026-03-10 00:52:13 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:52:13.419314 | orchestrator | 2026-03-10 00:52:13 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED
2026-03-10 00:52:13.419384 | orchestrator | 2026-03-10 00:52:13 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED
2026-03-10 00:52:13.420012 | orchestrator | 2026-03-10 00:52:13 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED
2026-03-10 00:52:13.421160 | orchestrator | 2026-03-10 00:52:13 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED
2026-03-10 00:52:13.422897 | orchestrator | 2026-03-10 00:52:13 | INFO  | Task 002d1203-c148-4643-bfe1-483f83e01c66 is in state SUCCESS
2026-03-10 00:52:13.422939 | orchestrator | 2026-03-10 00:52:13 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:52:13.424183 | orchestrator |
2026-03-10 00:52:13.424221 | orchestrator |
2026-03-10 00:52:13.424229 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-10 00:52:13.424255 | orchestrator |
2026-03-10 00:52:13.424263 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-10 00:52:13.424270 | orchestrator | Tuesday 10 March 2026 00:51:36 +0000 (0:00:00.363) 0:00:00.363 *********
2026-03-10 00:52:13.424277 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:52:13.424286 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:52:13.424292 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:52:13.424299 | orchestrator |
2026-03-10 00:52:13.424306 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-10 00:52:13.424313 | orchestrator | Tuesday 10 March 2026 00:51:36 +0000 (0:00:00.599) 0:00:00.962 *********
2026-03-10 00:52:13.424321 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2026-03-10 00:52:13.424328 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2026-03-10 00:52:13.424335 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2026-03-10 00:52:13.424342 | orchestrator |
2026-03-10 00:52:13.424349 | orchestrator | PLAY [Apply role memcached] ****************************************************
2026-03-10 00:52:13.424356 | orchestrator |
2026-03-10 00:52:13.424363 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2026-03-10 00:52:13.424370 | orchestrator | Tuesday 10 March 2026 00:51:37 +0000 (0:00:00.612) 0:00:01.575 *********
2026-03-10 00:52:13.424377 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:52:13.424385 | orchestrator |
2026-03-10 00:52:13.424392 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2026-03-10 00:52:13.424399 | orchestrator | Tuesday 10 March 2026 00:51:38 +0000 (0:00:00.945) 0:00:02.520 *********
2026-03-10 00:52:13.424405 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-10 00:52:13.424411 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-10 00:52:13.424417 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-10 00:52:13.424424 | orchestrator |
2026-03-10 00:52:13.424430 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2026-03-10 00:52:13.424436 | orchestrator | Tuesday 10 March 2026 00:51:39 +0000 (0:00:00.883) 0:00:03.403 *********
2026-03-10 00:52:13.424442 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2026-03-10 00:52:13.424448 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2026-03-10 00:52:13.424454 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2026-03-10 00:52:13.424460 | orchestrator |
2026-03-10 00:52:13.424465 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2026-03-10 00:52:13.424470 | orchestrator | Tuesday 10 March 2026 00:51:42 +0000 (0:00:02.912) 0:00:06.316 *********
2026-03-10 00:52:13.424476 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:52:13.424481 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:52:13.424487 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:52:13.424493 | orchestrator |
2026-03-10 00:52:13.424499 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2026-03-10 00:52:13.424505 | orchestrator | Tuesday 10 March 2026 00:51:44 +0000 (0:00:02.180) 0:00:08.497 *********
2026-03-10 00:52:13.424511 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:52:13.424517 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:52:13.424523 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:52:13.424529 | orchestrator |
2026-03-10 00:52:13.424535 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:52:13.424541 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:52:13.424549 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:52:13.424555 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:52:13.424568 | orchestrator |
2026-03-10 00:52:13.424574 | orchestrator |
2026-03-10 00:52:13.424580 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:52:13.424586 | orchestrator | Tuesday 10 March 2026 00:51:48 +0000 (0:00:03.860) 0:00:12.357 *********
2026-03-10 00:52:13.424592 | orchestrator | ===============================================================================
2026-03-10 00:52:13.424604 | orchestrator | memcached : Restart memcached container --------------------------------- 3.86s
2026-03-10 00:52:13.424610 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.91s
2026-03-10 00:52:13.424616 | orchestrator | memcached : Check memcached container ----------------------------------- 2.18s
2026-03-10 00:52:13.424621 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.95s
2026-03-10 00:52:13.424627 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.88s
2026-03-10 00:52:13.424633 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.61s
2026-03-10 00:52:13.424639 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.60s
2026-03-10 00:52:13.424644 | orchestrator |
2026-03-10 00:52:13.424650 | orchestrator |
2026-03-10 00:52:13.424657 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-10 00:52:13.424663 | orchestrator |
2026-03-10 00:52:13.424669 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-10 00:52:13.424675 | orchestrator | Tuesday 10 March 2026 00:51:36 +0000 (0:00:00.332) 0:00:00.333 *********
2026-03-10 00:52:13.424681 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:52:13.424687 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:52:13.424694 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:52:13.424700 | orchestrator |
2026-03-10 00:52:13.424706 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-10 00:52:13.424723 | orchestrator | Tuesday 10 March 2026 00:51:36 +0000 (0:00:00.432) 0:00:00.765 *********
2026-03-10 00:52:13.424730 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2026-03-10 00:52:13.424736 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2026-03-10 00:52:13.424743 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2026-03-10 00:52:13.424749 | orchestrator |
2026-03-10 00:52:13.424755 | orchestrator | PLAY [Apply role redis] ********************************************************
2026-03-10 00:52:13.424762 | orchestrator |
2026-03-10 00:52:13.424768 | orchestrator | TASK [redis : include_tasks] ***************************************************
2026-03-10 00:52:13.424774 | orchestrator | Tuesday 10 March 2026 00:51:37 +0000 (0:00:00.672) 0:00:01.438 *********
2026-03-10 00:52:13.424780 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:52:13.424785 | orchestrator |
2026-03-10 00:52:13.424792 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2026-03-10 00:52:13.424798 | orchestrator | Tuesday 10 March 2026 00:51:37 +0000 (0:00:00.562) 0:00:02.001 *********
2026-03-10 00:52:13.424808 | orchestrator | changed: [testbed-node-0]
=> (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.424820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.424833 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.424845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.424852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.424866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.424872 | orchestrator | 2026-03-10 00:52:13.424878 | orchestrator | TASK [redis : Copying over default config.json files] 
************************** 2026-03-10 00:52:13.424885 | orchestrator | Tuesday 10 March 2026 00:51:39 +0000 (0:00:01.598) 0:00:03.599 ********* 2026-03-10 00:52:13.424891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.424899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.424911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.424917 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 
'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.424927 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.424938 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.424944 | orchestrator | 2026-03-10 00:52:13.424950 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-10 00:52:13.424977 | orchestrator | Tuesday 10 March 2026 00:51:43 +0000 (0:00:03.766) 0:00:07.365 ********* 2026-03-10 00:52:13.424985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.424991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.425003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.425009 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.425019 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.425025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.425032 | orchestrator | 2026-03-10 00:52:13.425042 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-10 00:52:13.425048 | orchestrator | Tuesday 10 March 2026 00:51:46 +0000 (0:00:03.450) 0:00:10.816 ********* 2026-03-10 00:52:13.425054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.425060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.425071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 
'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.425078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.425087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.425094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 
'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-10 00:52:13.425101 | orchestrator | 2026-03-10 00:52:13.425107 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-10 00:52:13.425117 | orchestrator | Tuesday 10 March 2026 00:51:48 +0000 (0:00:02.392) 0:00:13.209 ********* 2026-03-10 00:52:13.425124 | orchestrator | 2026-03-10 00:52:13.425130 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-10 00:52:13.425140 | orchestrator | Tuesday 10 March 2026 00:51:48 +0000 (0:00:00.071) 0:00:13.281 ********* 2026-03-10 00:52:13.425147 | orchestrator | 2026-03-10 00:52:13.425153 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-10 00:52:13.425159 | orchestrator | Tuesday 10 March 2026 00:51:49 +0000 (0:00:00.184) 0:00:13.465 ********* 2026-03-10 00:52:13.425166 | orchestrator | 2026-03-10 00:52:13.425172 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-10 00:52:13.425179 | orchestrator | Tuesday 10 March 2026 00:51:49 +0000 (0:00:00.363) 0:00:13.829 ********* 2026-03-10 00:52:13.425185 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:52:13.425197 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:52:13.425204 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:52:13.425210 | orchestrator | 
2026-03-10 00:52:13.425217 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-10 00:52:13.425224 | orchestrator | Tuesday 10 March 2026 00:52:02 +0000 (0:00:12.587) 0:00:26.417 ********* 2026-03-10 00:52:13.425230 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:52:13.425237 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:52:13.425243 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:52:13.425249 | orchestrator | 2026-03-10 00:52:13.425256 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:52:13.425262 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:52:13.425274 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:52:13.425283 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:52:13.425289 | orchestrator | 2026-03-10 00:52:13.425295 | orchestrator | 2026-03-10 00:52:13.425302 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:52:13.425308 | orchestrator | Tuesday 10 March 2026 00:52:11 +0000 (0:00:09.765) 0:00:36.182 ********* 2026-03-10 00:52:13.425314 | orchestrator | =============================================================================== 2026-03-10 00:52:13.425320 | orchestrator | redis : Restart redis container ---------------------------------------- 12.59s 2026-03-10 00:52:13.425325 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 9.77s 2026-03-10 00:52:13.425332 | orchestrator | redis : Copying over default config.json files -------------------------- 3.77s 2026-03-10 00:52:13.425338 | orchestrator | redis : Copying over redis config files --------------------------------- 3.45s 2026-03-10 00:52:13.425344 | 
orchestrator | redis : Check redis containers ------------------------------------------ 2.39s 2026-03-10 00:52:13.425350 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.60s 2026-03-10 00:52:13.425356 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.67s 2026-03-10 00:52:13.425362 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.62s 2026-03-10 00:52:13.425368 | orchestrator | redis : include_tasks --------------------------------------------------- 0.56s 2026-03-10 00:52:13.425374 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s 2026-03-10 00:52:16.470389 | orchestrator | 2026-03-10 00:52:16 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:52:16.470820 | orchestrator | 2026-03-10 00:52:16 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:52:16.472620 | orchestrator | 2026-03-10 00:52:16 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:52:16.474772 | orchestrator | 2026-03-10 00:52:16 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED 2026-03-10 00:52:16.476270 | orchestrator | 2026-03-10 00:52:16 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:52:16.476371 | orchestrator | 2026-03-10 00:52:16 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:52:19.528558 | orchestrator | 2026-03-10 00:52:19 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:52:19.529235 | orchestrator | 2026-03-10 00:52:19 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:52:19.530613 | orchestrator | 2026-03-10 00:52:19 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:52:19.532170 | orchestrator | 2026-03-10 00:52:19 | INFO  | Task 
430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED 2026-03-10 00:52:19.534405 | orchestrator | 2026-03-10 00:52:19 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:52:19.534781 | orchestrator | 2026-03-10 00:52:19 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:52:22.607786 | orchestrator | 2026-03-10 00:52:22 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:52:22.610647 | orchestrator | 2026-03-10 00:52:22 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:52:22.612465 | orchestrator | 2026-03-10 00:52:22 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:52:22.614620 | orchestrator | 2026-03-10 00:52:22 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED 2026-03-10 00:52:22.617113 | orchestrator | 2026-03-10 00:52:22 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:52:22.617160 | orchestrator | 2026-03-10 00:52:22 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:52:25.730327 | orchestrator | 2026-03-10 00:52:25 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:52:25.733858 | orchestrator | 2026-03-10 00:52:25 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:52:25.733932 | orchestrator | 2026-03-10 00:52:25 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:52:25.736138 | orchestrator | 2026-03-10 00:52:25 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED 2026-03-10 00:52:25.736553 | orchestrator | 2026-03-10 00:52:25 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:52:25.736585 | orchestrator | 2026-03-10 00:52:25 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:52:28.808499 | orchestrator | 2026-03-10 00:52:28 | INFO  | Task 
fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:52:28.808715 | orchestrator | 2026-03-10 00:52:28 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:52:28.814891 | orchestrator | 2026-03-10 00:52:28 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:52:28.815693 | orchestrator | 2026-03-10 00:52:28 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED 2026-03-10 00:52:28.816813 | orchestrator | 2026-03-10 00:52:28 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:52:28.816881 | orchestrator | 2026-03-10 00:52:28 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:52:31.900290 | orchestrator | 2026-03-10 00:52:31 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:52:31.904276 | orchestrator | 2026-03-10 00:52:31 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:52:31.905175 | orchestrator | 2026-03-10 00:52:31 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:52:31.905754 | orchestrator | 2026-03-10 00:52:31 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED 2026-03-10 00:52:31.910987 | orchestrator | 2026-03-10 00:52:31 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:52:31.911054 | orchestrator | 2026-03-10 00:52:31 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:52:34.986249 | orchestrator | 2026-03-10 00:52:34 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:52:34.987269 | orchestrator | 2026-03-10 00:52:34 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:52:34.989942 | orchestrator | 2026-03-10 00:52:34 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:52:34.991434 | orchestrator | 2026-03-10 00:52:34 | INFO  | Task 
430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED 2026-03-10 00:52:34.994534 | orchestrator | 2026-03-10 00:52:34 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:52:34.994578 | orchestrator | 2026-03-10 00:52:34 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:52:38.041572 | orchestrator | 2026-03-10 00:52:38 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:52:38.043997 | orchestrator | 2026-03-10 00:52:38 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:52:38.045045 | orchestrator | 2026-03-10 00:52:38 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:52:38.046755 | orchestrator | 2026-03-10 00:52:38 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED 2026-03-10 00:52:38.048798 | orchestrator | 2026-03-10 00:52:38 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:52:38.048837 | orchestrator | 2026-03-10 00:52:38 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:52:41.099905 | orchestrator | 2026-03-10 00:52:41 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:52:41.104682 | orchestrator | 2026-03-10 00:52:41 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:52:41.105295 | orchestrator | 2026-03-10 00:52:41 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:52:41.106550 | orchestrator | 2026-03-10 00:52:41 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED 2026-03-10 00:52:41.107279 | orchestrator | 2026-03-10 00:52:41 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:52:41.107342 | orchestrator | 2026-03-10 00:52:41 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:52:44.155594 | orchestrator | 2026-03-10 00:52:44 | INFO  | Task 
fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:52:44.156411 | orchestrator | 2026-03-10 00:52:44 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:52:44.157085 | orchestrator | 2026-03-10 00:52:44 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:52:44.158141 | orchestrator | 2026-03-10 00:52:44 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED 2026-03-10 00:52:44.158795 | orchestrator | 2026-03-10 00:52:44 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:52:44.159079 | orchestrator | 2026-03-10 00:52:44 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:52:47.197628 | orchestrator | 2026-03-10 00:52:47 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:52:47.198704 | orchestrator | 2026-03-10 00:52:47 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:52:47.200542 | orchestrator | 2026-03-10 00:52:47 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:52:47.202286 | orchestrator | 2026-03-10 00:52:47 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED 2026-03-10 00:52:47.204558 | orchestrator | 2026-03-10 00:52:47 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:52:47.204704 | orchestrator | 2026-03-10 00:52:47 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:52:50.278299 | orchestrator | 2026-03-10 00:52:50 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:52:50.310449 | orchestrator | 2026-03-10 00:52:50 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:52:50.314229 | orchestrator | 2026-03-10 00:52:50 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:52:50.316130 | orchestrator | 2026-03-10 00:52:50 | INFO  | Task 
430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED 2026-03-10 00:52:50.318531 | orchestrator | 2026-03-10 00:52:50 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:52:50.318647 | orchestrator | 2026-03-10 00:52:50 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:52:53.431701 | orchestrator | 2026-03-10 00:52:53 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:52:53.431784 | orchestrator | 2026-03-10 00:52:53 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:52:53.431793 | orchestrator | 2026-03-10 00:52:53 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:52:53.431801 | orchestrator | 2026-03-10 00:52:53 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED 2026-03-10 00:52:53.431825 | orchestrator | 2026-03-10 00:52:53 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:52:53.431833 | orchestrator | 2026-03-10 00:52:53 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:52:56.467507 | orchestrator | 2026-03-10 00:52:56 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:52:56.467975 | orchestrator | 2026-03-10 00:52:56 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:52:56.469003 | orchestrator | 2026-03-10 00:52:56 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:52:56.469853 | orchestrator | 2026-03-10 00:52:56 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state STARTED 2026-03-10 00:52:56.470963 | orchestrator | 2026-03-10 00:52:56 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:52:56.471087 | orchestrator | 2026-03-10 00:52:56 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:52:59.505493 | orchestrator | 2026-03-10 00:52:59 | INFO  | Task 
fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:52:59.506185 | orchestrator | 2026-03-10 00:52:59 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:52:59.506573 | orchestrator | 2026-03-10 00:52:59 | INFO  | Task dfa69a1b-e8b4-433d-9ae8-4ef4f631c026 is in state STARTED 2026-03-10 00:52:59.508078 | orchestrator | 2026-03-10 00:52:59 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:52:59.509060 | orchestrator | 2026-03-10 00:52:59 | INFO  | Task 430bfb12-0e48-4f9c-bcd8-2d65f6b76437 is in state SUCCESS 2026-03-10 00:52:59.511792 | orchestrator | 2026-03-10 00:52:59.511831 | orchestrator | 2026-03-10 00:52:59.511844 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 00:52:59.511856 | orchestrator | 2026-03-10 00:52:59.511868 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 00:52:59.511880 | orchestrator | Tuesday 10 March 2026 00:51:36 +0000 (0:00:00.484) 0:00:00.484 ********* 2026-03-10 00:52:59.511917 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:52:59.511933 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:52:59.511944 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:52:59.511955 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:52:59.511983 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:52:59.511994 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:52:59.512005 | orchestrator | 2026-03-10 00:52:59.512016 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 00:52:59.512026 | orchestrator | Tuesday 10 March 2026 00:51:37 +0000 (0:00:00.962) 0:00:01.446 ********* 2026-03-10 00:52:59.512037 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-10 00:52:59.512048 | orchestrator | ok: [testbed-node-1] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-10 00:52:59.512058 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-10 00:52:59.512069 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-10 00:52:59.512080 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-10 00:52:59.512090 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-10 00:52:59.512101 | orchestrator | 2026-03-10 00:52:59.512111 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-10 00:52:59.512122 | orchestrator | 2026-03-10 00:52:59.512133 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-10 00:52:59.512143 | orchestrator | Tuesday 10 March 2026 00:51:38 +0000 (0:00:01.036) 0:00:02.483 ********* 2026-03-10 00:52:59.512155 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:52:59.512167 | orchestrator | 2026-03-10 00:52:59.512177 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-10 00:52:59.512188 | orchestrator | Tuesday 10 March 2026 00:51:40 +0000 (0:00:02.238) 0:00:04.722 ********* 2026-03-10 00:52:59.512198 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-10 00:52:59.512209 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-10 00:52:59.512220 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-10 00:52:59.512231 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-10 00:52:59.512241 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-10 00:52:59.512252 | orchestrator | changed: 
[testbed-node-5] => (item=openvswitch) 2026-03-10 00:52:59.512263 | orchestrator | 2026-03-10 00:52:59.512273 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-10 00:52:59.512284 | orchestrator | Tuesday 10 March 2026 00:51:42 +0000 (0:00:01.812) 0:00:06.534 ********* 2026-03-10 00:52:59.512294 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-10 00:52:59.512305 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-10 00:52:59.512316 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-10 00:52:59.512327 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-10 00:52:59.512338 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-10 00:52:59.512348 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-10 00:52:59.512359 | orchestrator | 2026-03-10 00:52:59.512370 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-10 00:52:59.512388 | orchestrator | Tuesday 10 March 2026 00:51:44 +0000 (0:00:01.822) 0:00:08.357 ********* 2026-03-10 00:52:59.512402 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-10 00:52:59.512415 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-10 00:52:59.512427 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:59.512440 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-10 00:52:59.512453 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:59.512465 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-10 00:52:59.512478 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:59.512489 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-10 00:52:59.512508 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:52:59.512520 | orchestrator | skipping: [testbed-node-4] 2026-03-10 
00:52:59.512532 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-10 00:52:59.512545 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:52:59.512557 | orchestrator | 2026-03-10 00:52:59.512569 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-10 00:52:59.512582 | orchestrator | Tuesday 10 March 2026 00:51:46 +0000 (0:00:01.965) 0:00:10.323 ********* 2026-03-10 00:52:59.512594 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:59.512606 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:59.512618 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:59.512631 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:52:59.512643 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:52:59.512655 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:52:59.512666 | orchestrator | 2026-03-10 00:52:59.512679 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-10 00:52:59.512691 | orchestrator | Tuesday 10 March 2026 00:51:48 +0000 (0:00:01.841) 0:00:12.164 ********* 2026-03-10 00:52:59.512723 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-10 00:52:59.512740 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-10 00:52:59.512754 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-10 00:52:59.512766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-10 00:52:59.512787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-10 00:52:59.512799 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-10 00:52:59.512818 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-10 00:52:59.512830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-10 00:52:59.512841 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-10 00:52:59.512857 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': 
{'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-10 00:52:59.512874 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-10 00:52:59.512912 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-10 00:52:59.512933 | orchestrator | 2026-03-10 00:52:59.512952 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2026-03-10 00:52:59.512972 | orchestrator | Tuesday 10 March 2026 00:51:52 +0000 (0:00:04.013) 0:00:16.178 ********* 2026-03-10 00:52:59.512992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': 
{'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513041 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513058 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513096 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513120 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513156 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513191 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513209 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513239 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513259 | orchestrator | 2026-03-10 00:52:59.513278 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-10 00:52:59.513298 | orchestrator | Tuesday 10 March 2026 00:51:57 +0000 (0:00:04.649) 0:00:20.828 ********* 2026-03-10 00:52:59.513314 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:52:59.513333 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:52:59.513353 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:52:59.513372 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:52:59.513389 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:52:59.513431 | orchestrator | skipping: [testbed-node-5] 
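The config tasks above each loop over a per-service dict: the key is the service name (`openvswitch-db-server`, `openvswitch-vswitchd`) and the value describes the container (image, volumes, healthcheck). A minimal Python sketch of that pattern follows; it is an illustration of the data shape seen in the log, not the actual kolla-ansible role code, and `config_dirs` is a hypothetical helper for deriving the `/etc/kolla/<service>/` paths being created:

```python
# Sketch of the service dict shape visible in the log above (trimmed to the
# fields relevant here). Values mirror the logged items; config_dirs is a
# hypothetical helper, not part of kolla-ansible.
services = {
    "openvswitch-db-server": {
        "container_name": "openvswitch_db",
        "image": "registry.osism.tech/kolla/openvswitch-db-server:2024.2",
        "enabled": True,
        "healthcheck": {"test": ["CMD-SHELL", "ovsdb-client list-dbs"],
                        "interval": "30", "retries": "3"},
    },
    "openvswitch-vswitchd": {
        "container_name": "openvswitch_vswitchd",
        "image": "registry.osism.tech/kolla/openvswitch-vswitchd:2024.2",
        "enabled": True,
        "healthcheck": {"test": ["CMD-SHELL", "ovs-appctl version"],
                        "interval": "30", "retries": "3"},
    },
}

def config_dirs(services):
    """Return host-side config directories for all enabled services,
    derived from the dict key as in the 'Ensuring config directories' task."""
    return [f"/etc/kolla/{name}/" for name, svc in services.items() if svc["enabled"]]

print(config_dirs(services))
```

The healthcheck `test` commands in this dict are exactly what the later container checks run inside each container (`ovsdb-client list-dbs` for the DB server, `ovs-appctl version` for vswitchd).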
2026-03-10 00:52:59.513442 | orchestrator | 2026-03-10 00:52:59.513453 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-10 00:52:59.513464 | orchestrator | Tuesday 10 March 2026 00:51:58 +0000 (0:00:01.878) 0:00:22.707 ********* 2026-03-10 00:52:59.513475 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513524 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513585 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513606 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513619 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513653 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 
'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513679 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-10 00:52:59.513708 | orchestrator | 2026-03-10 00:52:59.513726 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-10 00:52:59.513745 | orchestrator | Tuesday 10 March 2026 00:52:03 +0000 (0:00:04.771) 0:00:27.478 ********* 2026-03-10 00:52:59.513759 | orchestrator | 2026-03-10 00:52:59.513776 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-10 00:52:59.513795 | orchestrator | Tuesday 10 March 2026 00:52:03 +0000 (0:00:00.190) 0:00:27.669 ********* 2026-03-10 00:52:59.513812 | orchestrator | 2026-03-10 00:52:59.513832 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-10 00:52:59.513851 | orchestrator | Tuesday 10 March 
2026 00:52:04 +0000 (0:00:00.192) 0:00:27.861 ********* 2026-03-10 00:52:59.513869 | orchestrator | 2026-03-10 00:52:59.513886 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-10 00:52:59.513918 | orchestrator | Tuesday 10 March 2026 00:52:04 +0000 (0:00:00.230) 0:00:28.092 ********* 2026-03-10 00:52:59.513929 | orchestrator | 2026-03-10 00:52:59.513940 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-10 00:52:59.513951 | orchestrator | Tuesday 10 March 2026 00:52:04 +0000 (0:00:00.384) 0:00:28.477 ********* 2026-03-10 00:52:59.513961 | orchestrator | 2026-03-10 00:52:59.513972 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-10 00:52:59.513983 | orchestrator | Tuesday 10 March 2026 00:52:05 +0000 (0:00:00.679) 0:00:29.156 ********* 2026-03-10 00:52:59.513993 | orchestrator | 2026-03-10 00:52:59.514004 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-10 00:52:59.514015 | orchestrator | Tuesday 10 March 2026 00:52:05 +0000 (0:00:00.387) 0:00:29.544 ********* 2026-03-10 00:52:59.514091 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:52:59.514102 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:52:59.514113 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:52:59.514130 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:52:59.514147 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:52:59.514177 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:52:59.514195 | orchestrator | 2026-03-10 00:52:59.514212 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-10 00:52:59.514229 | orchestrator | Tuesday 10 March 2026 00:52:18 +0000 (0:00:12.842) 0:00:42.386 ********* 2026-03-10 00:52:59.514246 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:52:59.514265 
| orchestrator | ok: [testbed-node-2] 2026-03-10 00:52:59.514284 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:52:59.514301 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:52:59.514320 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:52:59.514337 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:52:59.514355 | orchestrator | 2026-03-10 00:52:59.514374 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-10 00:52:59.514394 | orchestrator | Tuesday 10 March 2026 00:52:20 +0000 (0:00:01.810) 0:00:44.196 ********* 2026-03-10 00:52:59.514413 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:52:59.514432 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:52:59.514451 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:52:59.514463 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:52:59.514480 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:52:59.514505 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:52:59.514528 | orchestrator | 2026-03-10 00:52:59.514546 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-10 00:52:59.514565 | orchestrator | Tuesday 10 March 2026 00:52:31 +0000 (0:00:10.969) 0:00:55.166 ********* 2026-03-10 00:52:59.514583 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-10 00:52:59.514601 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-10 00:52:59.514619 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-10 00:52:59.514659 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-10 00:52:59.514678 | orchestrator | changed: [testbed-node-4] => (item={'col': 
'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-10 00:52:59.514706 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-10 00:52:59.514718 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-10 00:52:59.514729 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-10 00:52:59.514739 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-10 00:52:59.514750 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-10 00:52:59.514761 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-10 00:52:59.514772 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-10 00:52:59.514782 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-10 00:52:59.514793 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-10 00:52:59.514804 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-10 00:52:59.514853 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-10 00:52:59.514876 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-10 00:52:59.514931 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 
'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-10 00:52:59.514946 | orchestrator | 2026-03-10 00:52:59.514959 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-10 00:52:59.514973 | orchestrator | Tuesday 10 March 2026 00:52:40 +0000 (0:00:09.335) 0:01:04.502 ********* 2026-03-10 00:52:59.514986 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-10 00:52:59.514999 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:52:59.515011 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-10 00:52:59.515025 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-10 00:52:59.515038 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-10 00:52:59.515050 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:52:59.515063 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:52:59.515075 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-10 00:52:59.515087 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-10 00:52:59.515100 | orchestrator | 2026-03-10 00:52:59.515113 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-10 00:52:59.515125 | orchestrator | Tuesday 10 March 2026 00:52:44 +0000 (0:00:03.586) 0:01:08.089 ********* 2026-03-10 00:52:59.515136 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-10 00:52:59.515147 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:52:59.515157 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-10 00:52:59.515168 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:52:59.515179 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-10 00:52:59.515190 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:52:59.515215 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-10 
00:52:59.515227 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-10 00:52:59.515248 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-10 00:52:59.515259 | orchestrator | 2026-03-10 00:52:59.515270 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-10 00:52:59.515281 | orchestrator | Tuesday 10 March 2026 00:52:47 +0000 (0:00:03.462) 0:01:11.551 ********* 2026-03-10 00:52:59.515291 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:52:59.515302 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:52:59.515312 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:52:59.515323 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:52:59.515333 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:52:59.515344 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:52:59.515355 | orchestrator | 2026-03-10 00:52:59.515366 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:52:59.515378 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-10 00:52:59.515390 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-10 00:52:59.515401 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-10 00:52:59.515411 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-10 00:52:59.515422 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-10 00:52:59.515442 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-10 00:52:59.515454 | orchestrator | 2026-03-10 00:52:59.515475 | orchestrator | 2026-03-10 00:52:59.515495 | orchestrator | 
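The "Set system-id, hostname and hw-offload" task above loops over column/name/value items and applies them to the Open_vSwitch table, which corresponds to `ovs-vsctl set` (and `ovs-vsctl remove` for items with `state: absent`). A minimal sketch of how those loop items map to commands — the command shapes are an assumption for illustration; the actual role drives the database through kolla's own module, not by shelling out:

```python
# Sketch: translate the loop items seen in the log into ovs-vsctl invocations.
# Assumption: items carrying state 'absent' become 'remove', all others 'set'.
def item_to_cmd(item):
    col, name = item["col"], item["name"]
    if item.get("state") == "absent":
        # drop the key from the column; the recorded value is irrelevant here
        return f"ovs-vsctl remove Open_vSwitch . {col} {name}"
    return f"ovs-vsctl set Open_vSwitch . {col}:{name}={item['value']}"

# The three items applied to testbed-node-0 in the log above:
items = [
    {"col": "external_ids", "name": "system-id", "value": "testbed-node-0"},
    {"col": "external_ids", "name": "hostname", "value": "testbed-node-0"},
    {"col": "other_config", "name": "hw-offload", "value": True, "state": "absent"},
]
for it in items:
    print(item_to_cmd(it))
```

This also explains the mixed `changed`/`ok` results in that task: the two `external_ids` writes change state, while removing an already-absent `hw-offload` key is a no-op.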
TASKS RECAP ******************************************************************** 2026-03-10 00:52:59.515514 | orchestrator | Tuesday 10 March 2026 00:52:56 +0000 (0:00:08.822) 0:01:20.373 ********* 2026-03-10 00:52:59.515535 | orchestrator | =============================================================================== 2026-03-10 00:52:59.515555 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 19.79s 2026-03-10 00:52:59.515575 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 12.84s 2026-03-10 00:52:59.515592 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 9.34s 2026-03-10 00:52:59.515603 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 4.77s 2026-03-10 00:52:59.515614 | orchestrator | openvswitch : Copying over config.json files for services --------------- 4.65s 2026-03-10 00:52:59.515625 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 4.01s 2026-03-10 00:52:59.515636 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 3.59s 2026-03-10 00:52:59.515646 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.46s 2026-03-10 00:52:59.515657 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.24s 2026-03-10 00:52:59.515668 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 2.06s 2026-03-10 00:52:59.515678 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.97s 2026-03-10 00:52:59.515689 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.88s 2026-03-10 00:52:59.515699 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 1.84s 2026-03-10 00:52:59.515710 | orchestrator | module-load : Persist 
modules via modules-load.d ------------------------ 1.82s 2026-03-10 00:52:59.515721 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.81s 2026-03-10 00:52:59.515731 | orchestrator | module-load : Load modules ---------------------------------------------- 1.81s 2026-03-10 00:52:59.515750 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.04s 2026-03-10 00:52:59.515761 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.96s 2026-03-10 00:52:59.515772 | orchestrator | 2026-03-10 00:52:59 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:52:59.515783 | orchestrator | 2026-03-10 00:52:59 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:53:02.551150 | orchestrator | 2026-03-10 00:53:02 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:53:02.551236 | orchestrator | 2026-03-10 00:53:02 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:53:02.552508 | orchestrator | 2026-03-10 00:53:02 | INFO  | Task dfa69a1b-e8b4-433d-9ae8-4ef4f631c026 is in state STARTED 2026-03-10 00:53:02.554088 | orchestrator | 2026-03-10 00:53:02 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state STARTED 2026-03-10 00:53:02.555815 | orchestrator | 2026-03-10 00:53:02 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:53:02.556067 | orchestrator | 2026-03-10 00:53:02 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:54:07.134353 | orchestrator | 2026-03-10 00:54:07 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:54:07.135889 | orchestrator | 2026-03-10 00:54:07 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:54:07.136829 | orchestrator | 2026-03-10 00:54:07 | INFO  | Task e1874a02-1c07-4711-8b3e-bb035b261d42 is in state STARTED 2026-03-10 00:54:07.138490 | orchestrator | 2026-03-10 00:54:07 | INFO  | Task
dfa69a1b-e8b4-433d-9ae8-4ef4f631c026 is in state STARTED 2026-03-10 00:54:07.141585 | orchestrator | 2026-03-10 00:54:07 | INFO  | Task aa0b43a2-64ac-4f65-a615-41e3e6017668 is in state SUCCESS 2026-03-10 00:54:07.145439 | orchestrator | 2026-03-10 00:54:07.145486 | orchestrator | 2026-03-10 00:54:07.145512 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-10 00:54:07.145517 | orchestrator | 2026-03-10 00:54:07.145522 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-10 00:54:07.145528 | orchestrator | Tuesday 10 March 2026 00:48:54 +0000 (0:00:00.194) 0:00:00.194 ********* 2026-03-10 00:54:07.145532 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:54:07.145538 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:54:07.145543 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:54:07.145548 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.145552 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:07.145556 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.145561 | orchestrator | 2026-03-10 00:54:07.145566 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-10 00:54:07.145589 | orchestrator | Tuesday 10 March 2026 00:48:55 +0000 (0:00:00.990) 0:00:01.185 ********* 2026-03-10 00:54:07.145593 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:54:07.145598 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:54:07.145602 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:54:07.145605 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.145609 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.145613 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.145617 | orchestrator | 2026-03-10 00:54:07.145621 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-10 00:54:07.145625 | 
orchestrator | Tuesday 10 March 2026 00:48:56 +0000 (0:00:00.824) 0:00:02.010 ********* 2026-03-10 00:54:07.145629 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:54:07.145632 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:54:07.145636 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:54:07.145640 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.145644 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.145647 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.145651 | orchestrator | 2026-03-10 00:54:07.145655 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-10 00:54:07.145659 | orchestrator | Tuesday 10 March 2026 00:48:57 +0000 (0:00:00.897) 0:00:02.907 ********* 2026-03-10 00:54:07.145663 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:07.145666 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:54:07.145670 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:07.145674 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:54:07.145677 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:07.145681 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:54:07.145685 | orchestrator | 2026-03-10 00:54:07.145689 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-10 00:54:07.145692 | orchestrator | Tuesday 10 March 2026 00:48:59 +0000 (0:00:02.415) 0:00:05.323 ********* 2026-03-10 00:54:07.145705 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:54:07.145709 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:54:07.145713 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:07.145717 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:07.145720 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:07.145724 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:54:07.145728 | orchestrator | 2026-03-10 00:54:07.145731 | 
orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-10 00:54:07.145735 | orchestrator | Tuesday 10 March 2026 00:49:01 +0000 (0:00:01.923) 0:00:07.246 ********* 2026-03-10 00:54:07.145739 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:54:07.145743 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:54:07.145760 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:54:07.145764 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:07.145768 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:07.145772 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:07.145775 | orchestrator | 2026-03-10 00:54:07.145779 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-10 00:54:07.145783 | orchestrator | Tuesday 10 March 2026 00:49:03 +0000 (0:00:01.387) 0:00:08.633 ********* 2026-03-10 00:54:07.145787 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:54:07.145790 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:54:07.145794 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:54:07.145838 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.145842 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.145846 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.145849 | orchestrator | 2026-03-10 00:54:07.145853 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-10 00:54:07.145857 | orchestrator | Tuesday 10 March 2026 00:49:03 +0000 (0:00:00.913) 0:00:09.546 ********* 2026-03-10 00:54:07.145861 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:54:07.145869 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:54:07.145873 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:54:07.145877 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.145880 | orchestrator | skipping: [testbed-node-1] 2026-03-10 
00:54:07.145884 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.145888 | orchestrator | 2026-03-10 00:54:07.145892 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-10 00:54:07.145896 | orchestrator | Tuesday 10 March 2026 00:49:04 +0000 (0:00:00.995) 0:00:10.541 ********* 2026-03-10 00:54:07.145900 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-10 00:54:07.145904 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-10 00:54:07.145907 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:54:07.145911 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-10 00:54:07.145915 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-10 00:54:07.145919 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:54:07.145922 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-10 00:54:07.145926 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-10 00:54:07.145930 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:54:07.145934 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-10 00:54:07.145948 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-10 00:54:07.145952 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.145956 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-10 00:54:07.145960 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-10 00:54:07.145963 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.145967 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  
2026-03-10 00:54:07.145971 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-10 00:54:07.145975 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.145978 | orchestrator | 2026-03-10 00:54:07.145982 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-03-10 00:54:07.145986 | orchestrator | Tuesday 10 March 2026 00:49:05 +0000 (0:00:00.856) 0:00:11.398 ********* 2026-03-10 00:54:07.145990 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:54:07.145994 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:54:07.145997 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:54:07.146001 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.146005 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.146009 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.146012 | orchestrator | 2026-03-10 00:54:07.146049 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-10 00:54:07.146054 | orchestrator | Tuesday 10 March 2026 00:49:07 +0000 (0:00:02.162) 0:00:13.561 ********* 2026-03-10 00:54:07.146058 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:54:07.146062 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:54:07.146066 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:54:07.146070 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.146074 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:07.146078 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.146081 | orchestrator | 2026-03-10 00:54:07.146085 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-10 00:54:07.146089 | orchestrator | Tuesday 10 March 2026 00:49:09 +0000 (0:00:01.948) 0:00:15.510 ********* 2026-03-10 00:54:07.146093 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:54:07.146096 | 
orchestrator | changed: [testbed-node-3] 2026-03-10 00:54:07.146100 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:07.146108 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:54:07.146112 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:07.146116 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:07.146119 | orchestrator | 2026-03-10 00:54:07.146123 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-10 00:54:07.146130 | orchestrator | Tuesday 10 March 2026 00:49:15 +0000 (0:00:05.782) 0:00:21.292 ********* 2026-03-10 00:54:07.146134 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:54:07.146138 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:54:07.146142 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:54:07.146145 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.146149 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.146153 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.146157 | orchestrator | 2026-03-10 00:54:07.146160 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-10 00:54:07.146164 | orchestrator | Tuesday 10 March 2026 00:49:17 +0000 (0:00:01.974) 0:00:23.267 ********* 2026-03-10 00:54:07.146168 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:54:07.146172 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:54:07.146175 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:54:07.146179 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.146183 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.146186 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.146190 | orchestrator | 2026-03-10 00:54:07.146194 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-10 00:54:07.146199 | 
orchestrator | Tuesday 10 March 2026 00:49:19 +0000 (0:00:01.789) 0:00:25.056 ********* 2026-03-10 00:54:07.146203 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:54:07.146207 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:54:07.146210 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:54:07.146214 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.146218 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.146222 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.146225 | orchestrator | 2026-03-10 00:54:07.146229 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-10 00:54:07.146233 | orchestrator | Tuesday 10 March 2026 00:49:20 +0000 (0:00:01.265) 0:00:26.321 ********* 2026-03-10 00:54:07.146237 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-10 00:54:07.146241 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-10 00:54:07.146245 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-10 00:54:07.146249 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-10 00:54:07.146252 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:54:07.146256 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-10 00:54:07.146260 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-10 00:54:07.146264 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:54:07.146268 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-10 00:54:07.146271 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-10 00:54:07.146275 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:54:07.146279 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.146283 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-10 00:54:07.146286 | orchestrator | skipping: [testbed-node-1] => 
(item=rancher/k3s)  2026-03-10 00:54:07.146290 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.146294 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-10 00:54:07.146297 | orchestrator | skipping: [testbed-node-2] => (item=rancher/k3s)  2026-03-10 00:54:07.146301 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.146305 | orchestrator | 2026-03-10 00:54:07.146309 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2026-03-10 00:54:07.146319 | orchestrator | Tuesday 10 March 2026 00:49:22 +0000 (0:00:01.396) 0:00:27.717 ********* 2026-03-10 00:54:07.146323 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:54:07.146327 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:54:07.146331 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:54:07.146334 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.146338 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.146342 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.146346 | orchestrator | 2026-03-10 00:54:07.146350 | orchestrator | TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] *** 2026-03-10 00:54:07.146353 | orchestrator | Tuesday 10 March 2026 00:49:23 +0000 (0:00:01.426) 0:00:29.143 ********* 2026-03-10 00:54:07.146357 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:54:07.146361 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:54:07.146365 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:54:07.146368 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.146372 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.146376 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.146380 | orchestrator | 2026-03-10 00:54:07.146383 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2026-03-10 00:54:07.146387 
| orchestrator | 2026-03-10 00:54:07.146391 | orchestrator | TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] *** 2026-03-10 00:54:07.146395 | orchestrator | Tuesday 10 March 2026 00:49:25 +0000 (0:00:02.047) 0:00:31.191 ********* 2026-03-10 00:54:07.146399 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.146402 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:07.146406 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.146410 | orchestrator | 2026-03-10 00:54:07.146414 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2026-03-10 00:54:07.146417 | orchestrator | Tuesday 10 March 2026 00:49:30 +0000 (0:00:04.980) 0:00:36.172 ********* 2026-03-10 00:54:07.146421 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.146425 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.146429 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:07.146432 | orchestrator | 2026-03-10 00:54:07.146436 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2026-03-10 00:54:07.146440 | orchestrator | Tuesday 10 March 2026 00:49:33 +0000 (0:00:02.652) 0:00:38.824 ********* 2026-03-10 00:54:07.146444 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.146448 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:07.146451 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.146455 | orchestrator | 2026-03-10 00:54:07.146459 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2026-03-10 00:54:07.146463 | orchestrator | Tuesday 10 March 2026 00:49:34 +0000 (0:00:01.381) 0:00:40.206 ********* 2026-03-10 00:54:07.146466 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.146470 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:07.146477 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.146481 | orchestrator | 2026-03-10 00:54:07.146485 | orchestrator | TASK 
[k3s_server : Deploy K3s http_proxy conf] ********************************* 2026-03-10 00:54:07.146488 | orchestrator | Tuesday 10 March 2026 00:49:36 +0000 (0:00:01.409) 0:00:41.616 ********* 2026-03-10 00:54:07.146492 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.146496 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.146500 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.146504 | orchestrator | 2026-03-10 00:54:07.146507 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2026-03-10 00:54:07.146511 | orchestrator | Tuesday 10 March 2026 00:49:36 +0000 (0:00:00.708) 0:00:42.324 ********* 2026-03-10 00:54:07.146515 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:07.146519 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:07.146522 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:07.146526 | orchestrator | 2026-03-10 00:54:07.146530 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2026-03-10 00:54:07.146537 | orchestrator | Tuesday 10 March 2026 00:49:39 +0000 (0:00:03.147) 0:00:45.472 ********* 2026-03-10 00:54:07.146541 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:07.146545 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:07.146549 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:07.146552 | orchestrator | 2026-03-10 00:54:07.146556 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2026-03-10 00:54:07.146560 | orchestrator | Tuesday 10 March 2026 00:49:41 +0000 (0:00:01.870) 0:00:47.342 ********* 2026-03-10 00:54:07.146564 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:54:07.146567 | orchestrator | 2026-03-10 00:54:07.146571 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 
2026-03-10 00:54:07.146575 | orchestrator | Tuesday 10 March 2026 00:49:42 +0000 (0:00:00.958) 0:00:48.301 ********* 2026-03-10 00:54:07.146579 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:07.146582 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.146586 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.146590 | orchestrator | 2026-03-10 00:54:07.146594 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2026-03-10 00:54:07.146598 | orchestrator | Tuesday 10 March 2026 00:49:46 +0000 (0:00:04.232) 0:00:52.533 ********* 2026-03-10 00:54:07.146602 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.146605 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.146609 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:07.146613 | orchestrator | 2026-03-10 00:54:07.146616 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2026-03-10 00:54:07.146620 | orchestrator | Tuesday 10 March 2026 00:49:47 +0000 (0:00:00.835) 0:00:53.369 ********* 2026-03-10 00:54:07.146624 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.146628 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.146631 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:07.146635 | orchestrator | 2026-03-10 00:54:07.146639 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2026-03-10 00:54:07.146643 | orchestrator | Tuesday 10 March 2026 00:49:48 +0000 (0:00:01.121) 0:00:54.491 ********* 2026-03-10 00:54:07.146647 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.146650 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.146654 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:07.146658 | orchestrator | 2026-03-10 00:54:07.146661 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2026-03-10 
00:54:07.146667 | orchestrator | Tuesday 10 March 2026 00:49:50 +0000 (0:00:01.955) 0:00:56.447 ********* 2026-03-10 00:54:07.146671 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.146675 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.146679 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.146682 | orchestrator | 2026-03-10 00:54:07.146686 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] *********************************** 2026-03-10 00:54:07.146690 | orchestrator | Tuesday 10 March 2026 00:49:51 +0000 (0:00:01.034) 0:00:57.481 ********* 2026-03-10 00:54:07.146694 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.146697 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.146701 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.146705 | orchestrator | 2026-03-10 00:54:07.146709 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] ********* 2026-03-10 00:54:07.146712 | orchestrator | Tuesday 10 March 2026 00:49:52 +0000 (0:00:00.363) 0:00:57.845 ********* 2026-03-10 00:54:07.146716 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:07.146720 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:07.146724 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:07.146727 | orchestrator | 2026-03-10 00:54:07.146731 | orchestrator | TASK [k3s_server : Detect Kubernetes version for label compatibility] ********** 2026-03-10 00:54:07.146735 | orchestrator | Tuesday 10 March 2026 00:49:53 +0000 (0:00:01.641) 0:00:59.487 ********* 2026-03-10 00:54:07.146739 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.146746 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:07.146750 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.146754 | orchestrator | 2026-03-10 00:54:07.146758 | orchestrator | TASK [k3s_server : Set node role label selector based on Kubernetes version] *** 2026-03-10 00:54:07.146762 | 
orchestrator | Tuesday 10 March 2026 00:49:56 +0000 (0:00:02.696) 0:01:02.184 ********* 2026-03-10 00:54:07.146765 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.146769 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:07.146773 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.146777 | orchestrator | 2026-03-10 00:54:07.146780 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] *** 2026-03-10 00:54:07.146784 | orchestrator | Tuesday 10 March 2026 00:49:57 +0000 (0:00:01.381) 0:01:03.565 ********* 2026-03-10 00:54:07.146788 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-10 00:54:07.146793 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-10 00:54:07.146818 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left). 2026-03-10 00:54:07.146825 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-10 00:54:07.146833 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-10 00:54:07.146837 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left). 2026-03-10 00:54:07.146841 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 
2026-03-10 00:54:07.146845 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-10 00:54:07.146848 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left). 2026-03-10 00:54:07.146852 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-10 00:54:07.146856 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-10 00:54:07.146859 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left). 2026-03-10 00:54:07.146863 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.146867 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:07.146871 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.146874 | orchestrator | 2026-03-10 00:54:07.146878 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ****************************** 2026-03-10 00:54:07.146882 | orchestrator | Tuesday 10 March 2026 00:50:41 +0000 (0:00:43.863) 0:01:47.429 ********* 2026-03-10 00:54:07.146886 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.146890 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.146893 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.146897 | orchestrator | 2026-03-10 00:54:07.146901 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] ********* 2026-03-10 00:54:07.146905 | orchestrator | Tuesday 10 March 2026 00:50:42 +0000 (0:00:00.751) 0:01:48.181 ********* 2026-03-10 00:54:07.146908 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:07.146912 | orchestrator | changed: 
[testbed-node-1] 2026-03-10 00:54:07.146916 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:07.146920 | orchestrator | 2026-03-10 00:54:07.146923 | orchestrator | TASK [k3s_server : Copy K3s service file] ************************************** 2026-03-10 00:54:07.146931 | orchestrator | Tuesday 10 March 2026 00:50:44 +0000 (0:00:02.286) 0:01:50.468 ********* 2026-03-10 00:54:07.146935 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:07.146939 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:07.146942 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:07.146946 | orchestrator | 2026-03-10 00:54:07.146953 | orchestrator | TASK [k3s_server : Enable and check K3s service] ******************************* 2026-03-10 00:54:07.146957 | orchestrator | Tuesday 10 March 2026 00:50:46 +0000 (0:00:02.112) 0:01:52.580 ********* 2026-03-10 00:54:07.146961 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:07.146964 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:07.146968 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:07.146972 | orchestrator | 2026-03-10 00:54:07.146978 | orchestrator | TASK [k3s_server : Wait for node-token] **************************************** 2026-03-10 00:54:07.146984 | orchestrator | Tuesday 10 March 2026 00:51:11 +0000 (0:00:24.780) 0:02:17.361 ********* 2026-03-10 00:54:07.146990 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.146997 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:07.147005 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.147014 | orchestrator | 2026-03-10 00:54:07.147020 | orchestrator | TASK [k3s_server : Register node-token file access mode] *********************** 2026-03-10 00:54:07.147026 | orchestrator | Tuesday 10 March 2026 00:51:12 +0000 (0:00:00.761) 0:02:18.123 ********* 2026-03-10 00:54:07.147031 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.147037 | orchestrator | ok: [testbed-node-1] 2026-03-10 
00:54:07.147044 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.147050 | orchestrator | 2026-03-10 00:54:07.147055 | orchestrator | TASK [k3s_server : Change file access node-token] ****************************** 2026-03-10 00:54:07.147062 | orchestrator | Tuesday 10 March 2026 00:51:13 +0000 (0:00:00.612) 0:02:18.735 ********* 2026-03-10 00:54:07.147067 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:07.147072 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:07.147077 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:07.147085 | orchestrator | 2026-03-10 00:54:07.147091 | orchestrator | TASK [k3s_server : Read node-token from master] ******************************** 2026-03-10 00:54:07.147096 | orchestrator | Tuesday 10 March 2026 00:51:13 +0000 (0:00:00.593) 0:02:19.329 ********* 2026-03-10 00:54:07.147101 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.147107 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:07.147112 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.147119 | orchestrator | 2026-03-10 00:54:07.147124 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************ 2026-03-10 00:54:07.147130 | orchestrator | Tuesday 10 March 2026 00:51:14 +0000 (0:00:01.065) 0:02:20.394 ********* 2026-03-10 00:54:07.147137 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.147143 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:07.147150 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.147156 | orchestrator | 2026-03-10 00:54:07.147163 | orchestrator | TASK [k3s_server : Restore node-token file access] ***************************** 2026-03-10 00:54:07.147170 | orchestrator | Tuesday 10 March 2026 00:51:15 +0000 (0:00:00.384) 0:02:20.778 ********* 2026-03-10 00:54:07.147176 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:07.147187 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:07.147194 | orchestrator | changed: 
[testbed-node-2] 2026-03-10 00:54:07.147200 | orchestrator | 2026-03-10 00:54:07.147204 | orchestrator | TASK [k3s_server : Create directory .kube] ************************************* 2026-03-10 00:54:07.147208 | orchestrator | Tuesday 10 March 2026 00:51:15 +0000 (0:00:00.695) 0:02:21.474 ********* 2026-03-10 00:54:07.147212 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:07.147216 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:07.147220 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:07.147223 | orchestrator | 2026-03-10 00:54:07.147227 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ******************** 2026-03-10 00:54:07.147231 | orchestrator | Tuesday 10 March 2026 00:51:16 +0000 (0:00:00.974) 0:02:22.449 ********* 2026-03-10 00:54:07.147240 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:07.147244 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:07.147247 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:07.147251 | orchestrator | 2026-03-10 00:54:07.147255 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] ***** 2026-03-10 00:54:07.147258 | orchestrator | Tuesday 10 March 2026 00:51:18 +0000 (0:00:01.486) 0:02:23.935 ********* 2026-03-10 00:54:07.147262 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:07.147266 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:07.147270 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:07.147274 | orchestrator | 2026-03-10 00:54:07.147277 | orchestrator | TASK [k3s_server : Create kubectl symlink] ************************************* 2026-03-10 00:54:07.147281 | orchestrator | Tuesday 10 March 2026 00:51:19 +0000 (0:00:01.104) 0:02:25.040 ********* 2026-03-10 00:54:07.147285 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.147289 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.147292 | orchestrator | skipping: 
[testbed-node-2] 2026-03-10 00:54:07.147296 | orchestrator | 2026-03-10 00:54:07.147300 | orchestrator | TASK [k3s_server : Create crictl symlink] ************************************** 2026-03-10 00:54:07.147303 | orchestrator | Tuesday 10 March 2026 00:51:19 +0000 (0:00:00.363) 0:02:25.403 ********* 2026-03-10 00:54:07.147307 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.147311 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.147314 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.147318 | orchestrator | 2026-03-10 00:54:07.147322 | orchestrator | TASK [k3s_server : Get contents of manifests folder] *************************** 2026-03-10 00:54:07.147326 | orchestrator | Tuesday 10 March 2026 00:51:20 +0000 (0:00:00.469) 0:02:25.872 ********* 2026-03-10 00:54:07.147329 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.147333 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:07.147337 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.147341 | orchestrator | 2026-03-10 00:54:07.147344 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] *************************** 2026-03-10 00:54:07.147348 | orchestrator | Tuesday 10 March 2026 00:51:21 +0000 (0:00:01.345) 0:02:27.218 ********* 2026-03-10 00:54:07.147352 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.147356 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:07.147360 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.147366 | orchestrator | 2026-03-10 00:54:07.147372 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] *** 2026-03-10 00:54:07.147378 | orchestrator | Tuesday 10 March 2026 00:51:22 +0000 (0:00:00.725) 0:02:27.944 ********* 2026-03-10 00:54:07.147384 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-10 00:54:07.147395 | orchestrator | 
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-10 00:54:07.147401 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml) 2026-03-10 00:54:07.147406 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-10 00:54:07.147412 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-10 00:54:07.147417 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml) 2026-03-10 00:54:07.147423 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-10 00:54:07.147429 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-10 00:54:07.147435 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml) 2026-03-10 00:54:07.147441 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml) 2026-03-10 00:54:07.147447 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-10 00:54:07.147459 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-10 00:54:07.147465 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml) 2026-03-10 00:54:07.147471 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-10 00:54:07.147478 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-10 00:54:07.147483 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml) 2026-03-10 00:54:07.147487 | orchestrator | changed: 
[testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-10 00:54:07.147494 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-10 00:54:07.147500 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml) 2026-03-10 00:54:07.147514 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server) 2026-03-10 00:54:07.147520 | orchestrator | 2026-03-10 00:54:07.147526 | orchestrator | PLAY [Deploy k3s worker nodes] ************************************************* 2026-03-10 00:54:07.147532 | orchestrator | 2026-03-10 00:54:07.147538 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] *** 2026-03-10 00:54:07.147544 | orchestrator | Tuesday 10 March 2026 00:51:25 +0000 (0:00:03.131) 0:02:31.076 ********* 2026-03-10 00:54:07.147550 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:54:07.147555 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:54:07.147561 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:54:07.147567 | orchestrator | 2026-03-10 00:54:07.147573 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] ******************************* 2026-03-10 00:54:07.147580 | orchestrator | Tuesday 10 March 2026 00:51:26 +0000 (0:00:00.635) 0:02:31.712 ********* 2026-03-10 00:54:07.147586 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:54:07.147593 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:54:07.147599 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:54:07.147605 | orchestrator | 2026-03-10 00:54:07.147611 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ****************************** 2026-03-10 00:54:07.147617 | orchestrator | Tuesday 10 March 2026 00:51:26 +0000 (0:00:00.717) 0:02:32.429 ********* 2026-03-10 00:54:07.147623 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:54:07.147629 | 
orchestrator | ok: [testbed-node-4] 2026-03-10 00:54:07.147635 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:54:07.147641 | orchestrator | 2026-03-10 00:54:07.147648 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] ********************** 2026-03-10 00:54:07.147654 | orchestrator | Tuesday 10 March 2026 00:51:27 +0000 (0:00:00.400) 0:02:32.829 ********* 2026-03-10 00:54:07.147660 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 00:54:07.147667 | orchestrator | 2026-03-10 00:54:07.147673 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] ************************* 2026-03-10 00:54:07.147679 | orchestrator | Tuesday 10 March 2026 00:51:28 +0000 (0:00:00.797) 0:02:33.626 ********* 2026-03-10 00:54:07.147686 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:54:07.147692 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:54:07.147698 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:54:07.147704 | orchestrator | 2026-03-10 00:54:07.147710 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] ******************************* 2026-03-10 00:54:07.147716 | orchestrator | Tuesday 10 March 2026 00:51:28 +0000 (0:00:00.370) 0:02:33.997 ********* 2026-03-10 00:54:07.147722 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:54:07.147728 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:54:07.147735 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:54:07.147741 | orchestrator | 2026-03-10 00:54:07.147748 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] ********************************** 2026-03-10 00:54:07.147759 | orchestrator | Tuesday 10 March 2026 00:51:28 +0000 (0:00:00.360) 0:02:34.357 ********* 2026-03-10 00:54:07.147765 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:54:07.147772 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:54:07.147778 | 
orchestrator | skipping: [testbed-node-5] 2026-03-10 00:54:07.147784 | orchestrator | 2026-03-10 00:54:07.147790 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] *************************** 2026-03-10 00:54:07.147813 | orchestrator | Tuesday 10 March 2026 00:51:29 +0000 (0:00:00.399) 0:02:34.757 ********* 2026-03-10 00:54:07.147819 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:54:07.147825 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:54:07.147873 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:54:07.147881 | orchestrator | 2026-03-10 00:54:07.147893 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] *************************** 2026-03-10 00:54:07.147900 | orchestrator | Tuesday 10 March 2026 00:51:30 +0000 (0:00:01.083) 0:02:35.840 ********* 2026-03-10 00:54:07.147906 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:54:07.147914 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:54:07.147918 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:54:07.147922 | orchestrator | 2026-03-10 00:54:07.147925 | orchestrator | TASK [k3s_agent : Configure the k3s service] *********************************** 2026-03-10 00:54:07.147929 | orchestrator | Tuesday 10 March 2026 00:51:31 +0000 (0:00:01.459) 0:02:37.300 ********* 2026-03-10 00:54:07.147933 | orchestrator | changed: [testbed-node-3] 2026-03-10 00:54:07.147937 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:54:07.147940 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:54:07.147944 | orchestrator | 2026-03-10 00:54:07.147948 | orchestrator | TASK [k3s_agent : Manage k3s service] ****************************************** 2026-03-10 00:54:07.147952 | orchestrator | Tuesday 10 March 2026 00:51:33 +0000 (0:00:01.489) 0:02:38.789 ********* 2026-03-10 00:54:07.147955 | orchestrator | changed: [testbed-node-4] 2026-03-10 00:54:07.147959 | orchestrator | changed: [testbed-node-5] 2026-03-10 00:54:07.147962 | orchestrator | 
changed: [testbed-node-3] 2026-03-10 00:54:07.147966 | orchestrator | 2026-03-10 00:54:07.147970 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-10 00:54:07.147974 | orchestrator | 2026-03-10 00:54:07.147977 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-10 00:54:07.147981 | orchestrator | Tuesday 10 March 2026 00:51:44 +0000 (0:00:11.254) 0:02:50.044 ********* 2026-03-10 00:54:07.147985 | orchestrator | ok: [testbed-manager] 2026-03-10 00:54:07.147988 | orchestrator | 2026-03-10 00:54:07.147992 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-10 00:54:07.147996 | orchestrator | Tuesday 10 March 2026 00:51:45 +0000 (0:00:00.967) 0:02:51.012 ********* 2026-03-10 00:54:07.148000 | orchestrator | changed: [testbed-manager] 2026-03-10 00:54:07.148004 | orchestrator | 2026-03-10 00:54:07.148007 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-10 00:54:07.148011 | orchestrator | Tuesday 10 March 2026 00:51:46 +0000 (0:00:00.615) 0:02:51.627 ********* 2026-03-10 00:54:07.148015 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-10 00:54:07.148018 | orchestrator | 2026-03-10 00:54:07.148022 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-10 00:54:07.148026 | orchestrator | Tuesday 10 March 2026 00:51:46 +0000 (0:00:00.701) 0:02:52.328 ********* 2026-03-10 00:54:07.148029 | orchestrator | changed: [testbed-manager] 2026-03-10 00:54:07.148033 | orchestrator | 2026-03-10 00:54:07.148042 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-10 00:54:07.148046 | orchestrator | Tuesday 10 March 2026 00:51:47 +0000 (0:00:01.131) 0:02:53.460 ********* 2026-03-10 00:54:07.148050 | orchestrator | changed: 
[testbed-manager] 2026-03-10 00:54:07.148053 | orchestrator | 2026-03-10 00:54:07.148057 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-10 00:54:07.148061 | orchestrator | Tuesday 10 March 2026 00:51:48 +0000 (0:00:00.785) 0:02:54.246 ********* 2026-03-10 00:54:07.148069 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-10 00:54:07.148073 | orchestrator | 2026-03-10 00:54:07.148077 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-10 00:54:07.148081 | orchestrator | Tuesday 10 March 2026 00:51:50 +0000 (0:00:02.087) 0:02:56.333 ********* 2026-03-10 00:54:07.148084 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-10 00:54:07.148088 | orchestrator | 2026-03-10 00:54:07.148092 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-10 00:54:07.148095 | orchestrator | Tuesday 10 March 2026 00:51:51 +0000 (0:00:01.129) 0:02:57.463 ********* 2026-03-10 00:54:07.148099 | orchestrator | changed: [testbed-manager] 2026-03-10 00:54:07.148103 | orchestrator | 2026-03-10 00:54:07.148106 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-10 00:54:07.148110 | orchestrator | Tuesday 10 March 2026 00:51:52 +0000 (0:00:00.809) 0:02:58.273 ********* 2026-03-10 00:54:07.148114 | orchestrator | changed: [testbed-manager] 2026-03-10 00:54:07.148117 | orchestrator | 2026-03-10 00:54:07.148121 | orchestrator | PLAY [Apply role kubectl] ****************************************************** 2026-03-10 00:54:07.148125 | orchestrator | 2026-03-10 00:54:07.148128 | orchestrator | TASK [kubectl : Gather variables for each operating system] ******************** 2026-03-10 00:54:07.148132 | orchestrator | Tuesday 10 March 2026 00:51:53 +0000 (0:00:00.610) 0:02:58.883 ********* 2026-03-10 00:54:07.148136 | orchestrator | ok: [testbed-manager] 
2026-03-10 00:54:07.148139 | orchestrator | 2026-03-10 00:54:07.148143 | orchestrator | TASK [kubectl : Include distribution specific install tasks] ******************* 2026-03-10 00:54:07.148147 | orchestrator | Tuesday 10 March 2026 00:51:53 +0000 (0:00:00.190) 0:02:59.073 ********* 2026-03-10 00:54:07.148153 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager 2026-03-10 00:54:07.148159 | orchestrator | 2026-03-10 00:54:07.148165 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ****************** 2026-03-10 00:54:07.148171 | orchestrator | Tuesday 10 March 2026 00:51:53 +0000 (0:00:00.332) 0:02:59.406 ********* 2026-03-10 00:54:07.148177 | orchestrator | ok: [testbed-manager] 2026-03-10 00:54:07.148183 | orchestrator | 2026-03-10 00:54:07.148189 | orchestrator | TASK [kubectl : Install apt-transport-https package] *************************** 2026-03-10 00:54:07.148195 | orchestrator | Tuesday 10 March 2026 00:51:55 +0000 (0:00:01.315) 0:03:00.721 ********* 2026-03-10 00:54:07.148201 | orchestrator | ok: [testbed-manager] 2026-03-10 00:54:07.148208 | orchestrator | 2026-03-10 00:54:07.148213 | orchestrator | TASK [kubectl : Add repository gpg key] **************************************** 2026-03-10 00:54:07.148219 | orchestrator | Tuesday 10 March 2026 00:51:57 +0000 (0:00:02.558) 0:03:03.281 ********* 2026-03-10 00:54:07.148225 | orchestrator | changed: [testbed-manager] 2026-03-10 00:54:07.148231 | orchestrator | 2026-03-10 00:54:07.148237 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************ 2026-03-10 00:54:07.148241 | orchestrator | Tuesday 10 March 2026 00:51:58 +0000 (0:00:01.011) 0:03:04.293 ********* 2026-03-10 00:54:07.148245 | orchestrator | ok: [testbed-manager] 2026-03-10 00:54:07.148249 | orchestrator | 2026-03-10 00:54:07.148258 | orchestrator | TASK [kubectl : Add repository Debian] ***************************************** 
2026-03-10 00:54:07.148264 | orchestrator | Tuesday 10 March 2026 00:51:59 +0000 (0:00:00.706) 0:03:04.999 ********* 2026-03-10 00:54:07.148270 | orchestrator | changed: [testbed-manager] 2026-03-10 00:54:07.148276 | orchestrator | 2026-03-10 00:54:07.148282 | orchestrator | TASK [kubectl : Install required packages] ************************************* 2026-03-10 00:54:07.148288 | orchestrator | Tuesday 10 March 2026 00:52:09 +0000 (0:00:10.192) 0:03:15.192 ********* 2026-03-10 00:54:07.148294 | orchestrator | changed: [testbed-manager] 2026-03-10 00:54:07.148301 | orchestrator | 2026-03-10 00:54:07.148307 | orchestrator | TASK [kubectl : Remove kubectl symlink] **************************************** 2026-03-10 00:54:07.148313 | orchestrator | Tuesday 10 March 2026 00:52:27 +0000 (0:00:17.842) 0:03:33.035 ********* 2026-03-10 00:54:07.148319 | orchestrator | ok: [testbed-manager] 2026-03-10 00:54:07.148333 | orchestrator | 2026-03-10 00:54:07.148337 | orchestrator | PLAY [Run post actions on master nodes] **************************************** 2026-03-10 00:54:07.148341 | orchestrator | 2026-03-10 00:54:07.148345 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] *** 2026-03-10 00:54:07.148349 | orchestrator | Tuesday 10 March 2026 00:52:28 +0000 (0:00:00.623) 0:03:33.658 ********* 2026-03-10 00:54:07.148353 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.148360 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:07.148365 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.148371 | orchestrator | 2026-03-10 00:54:07.148377 | orchestrator | TASK [k3s_server_post : Deploy calico] ***************************************** 2026-03-10 00:54:07.148383 | orchestrator | Tuesday 10 March 2026 00:52:28 +0000 (0:00:00.408) 0:03:34.066 ********* 2026-03-10 00:54:07.148390 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.148395 | orchestrator | skipping: [testbed-node-1] 
2026-03-10 00:54:07.148402 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.148407 | orchestrator | 2026-03-10 00:54:07.148413 | orchestrator | TASK [k3s_server_post : Deploy cilium] ***************************************** 2026-03-10 00:54:07.148420 | orchestrator | Tuesday 10 March 2026 00:52:28 +0000 (0:00:00.407) 0:03:34.474 ********* 2026-03-10 00:54:07.148426 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:54:07.148433 | orchestrator | 2026-03-10 00:54:07.148439 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ****************** 2026-03-10 00:54:07.148445 | orchestrator | Tuesday 10 March 2026 00:52:29 +0000 (0:00:00.921) 0:03:35.396 ********* 2026-03-10 00:54:07.148451 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-10 00:54:07.148457 | orchestrator | 2026-03-10 00:54:07.148464 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] ********************* 2026-03-10 00:54:07.148470 | orchestrator | Tuesday 10 March 2026 00:52:31 +0000 (0:00:01.271) 0:03:36.667 ********* 2026-03-10 00:54:07.148477 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 00:54:07.148483 | orchestrator | 2026-03-10 00:54:07.148489 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************ 2026-03-10 00:54:07.148495 | orchestrator | Tuesday 10 March 2026 00:52:32 +0000 (0:00:01.044) 0:03:37.712 ********* 2026-03-10 00:54:07.148501 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.148508 | orchestrator | 2026-03-10 00:54:07.148514 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] ********************** 2026-03-10 00:54:07.148520 | orchestrator | Tuesday 10 March 2026 00:52:32 +0000 (0:00:00.141) 0:03:37.853 ********* 2026-03-10 00:54:07.148526 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 00:54:07.148533 | 
orchestrator | 2026-03-10 00:54:07.148539 | orchestrator | TASK [k3s_server_post : Check Cilium version] ********************************** 2026-03-10 00:54:07.148545 | orchestrator | Tuesday 10 March 2026 00:52:33 +0000 (0:00:01.178) 0:03:39.032 ********* 2026-03-10 00:54:07.148552 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.148558 | orchestrator | 2026-03-10 00:54:07.148564 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************ 2026-03-10 00:54:07.149130 | orchestrator | Tuesday 10 March 2026 00:52:33 +0000 (0:00:00.167) 0:03:39.200 ********* 2026-03-10 00:54:07.149160 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.149167 | orchestrator | 2026-03-10 00:54:07.149174 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] ********************** 2026-03-10 00:54:07.149181 | orchestrator | Tuesday 10 March 2026 00:52:33 +0000 (0:00:00.132) 0:03:39.333 ********* 2026-03-10 00:54:07.149187 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.149193 | orchestrator | 2026-03-10 00:54:07.149200 | orchestrator | TASK [k3s_server_post : Log result] ******************************************** 2026-03-10 00:54:07.149206 | orchestrator | Tuesday 10 March 2026 00:52:33 +0000 (0:00:00.147) 0:03:39.481 ********* 2026-03-10 00:54:07.149212 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.149218 | orchestrator | 2026-03-10 00:54:07.149224 | orchestrator | TASK [k3s_server_post : Install Cilium] **************************************** 2026-03-10 00:54:07.149243 | orchestrator | Tuesday 10 March 2026 00:52:34 +0000 (0:00:00.141) 0:03:39.622 ********* 2026-03-10 00:54:07.149250 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-10 00:54:07.149255 | orchestrator | 2026-03-10 00:54:07.149262 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] ***************************** 2026-03-10 00:54:07.149268 | orchestrator | Tuesday 10 March 
2026 00:52:41 +0000 (0:00:07.146) 0:03:46.768 ********* 2026-03-10 00:54:07.149274 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator) 2026-03-10 00:54:07.149280 | orchestrator | FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left). 2026-03-10 00:54:07.149288 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-10 00:54:07.149292 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-10 00:54:07.149296 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-10 00:54:07.149300 | orchestrator | 2026-03-10 00:54:07.149303 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-10 00:54:07.149307 | orchestrator | Tuesday 10 March 2026 00:53:29 +0000 (0:00:48.616) 0:04:35.384 ********* 2026-03-10 00:54:07.149320 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 00:54:07.149324 | orchestrator | 2026-03-10 00:54:07.149327 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-10 00:54:07.149331 | orchestrator | Tuesday 10 March 2026 00:53:31 +0000 (0:00:01.356) 0:04:36.741 ********* 2026-03-10 00:54:07.149335 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-10 00:54:07.149339 | orchestrator | 2026-03-10 00:54:07.149342 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-10 00:54:07.149346 | orchestrator | Tuesday 10 March 2026 00:53:33 +0000 (0:00:02.483) 0:04:39.225 ********* 2026-03-10 00:54:07.149350 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-10 00:54:07.149353 | orchestrator | 2026-03-10 00:54:07.149357 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-10 00:54:07.149361 | orchestrator | Tuesday 10 March 2026 00:53:34 +0000 
(0:00:01.272) 0:04:40.498 ********* 2026-03-10 00:54:07.149365 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.149368 | orchestrator | 2026-03-10 00:54:07.149372 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-10 00:54:07.149376 | orchestrator | Tuesday 10 March 2026 00:53:35 +0000 (0:00:00.122) 0:04:40.620 ********* 2026-03-10 00:54:07.149380 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-10 00:54:07.149384 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-10 00:54:07.149387 | orchestrator | 2026-03-10 00:54:07.149391 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-10 00:54:07.149395 | orchestrator | Tuesday 10 March 2026 00:53:37 +0000 (0:00:02.144) 0:04:42.765 ********* 2026-03-10 00:54:07.149398 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.149402 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.149406 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.149410 | orchestrator | 2026-03-10 00:54:07.149414 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-10 00:54:07.149417 | orchestrator | Tuesday 10 March 2026 00:53:37 +0000 (0:00:00.410) 0:04:43.175 ********* 2026-03-10 00:54:07.149421 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.149425 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:07.149429 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.149432 | orchestrator | 2026-03-10 00:54:07.149436 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-10 00:54:07.149440 | orchestrator | 2026-03-10 00:54:07.149444 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-10 
00:54:07.149447 | orchestrator | Tuesday 10 March 2026 00:53:38 +0000 (0:00:01.413) 0:04:44.589 ********* 2026-03-10 00:54:07.149455 | orchestrator | ok: [testbed-manager] 2026-03-10 00:54:07.149459 | orchestrator | 2026-03-10 00:54:07.149463 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2026-03-10 00:54:07.149467 | orchestrator | Tuesday 10 March 2026 00:53:39 +0000 (0:00:00.170) 0:04:44.760 ********* 2026-03-10 00:54:07.149470 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager 2026-03-10 00:54:07.149474 | orchestrator | 2026-03-10 00:54:07.149478 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2026-03-10 00:54:07.149482 | orchestrator | Tuesday 10 March 2026 00:53:39 +0000 (0:00:00.340) 0:04:45.100 ********* 2026-03-10 00:54:07.149485 | orchestrator | changed: [testbed-manager] 2026-03-10 00:54:07.149489 | orchestrator | 2026-03-10 00:54:07.149493 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2026-03-10 00:54:07.149497 | orchestrator | 2026-03-10 00:54:07.149500 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2026-03-10 00:54:07.149504 | orchestrator | Tuesday 10 March 2026 00:53:46 +0000 (0:00:06.629) 0:04:51.729 ********* 2026-03-10 00:54:07.149508 | orchestrator | ok: [testbed-node-3] 2026-03-10 00:54:07.149512 | orchestrator | ok: [testbed-node-4] 2026-03-10 00:54:07.149515 | orchestrator | ok: [testbed-node-5] 2026-03-10 00:54:07.149519 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:07.149523 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:07.149527 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:07.149530 | orchestrator | 2026-03-10 00:54:07.149534 | orchestrator | TASK [Manage labels] *********************************************************** 2026-03-10 00:54:07.149538 | orchestrator | 
Tuesday 10 March 2026 00:53:47 +0000 (0:00:01.346) 0:04:53.076 ********* 2026-03-10 00:54:07.149542 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-10 00:54:07.149545 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-10 00:54:07.149549 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-10 00:54:07.149555 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-10 00:54:07.149559 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2026-03-10 00:54:07.149563 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2026-03-10 00:54:07.149566 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-10 00:54:07.149571 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-10 00:54:07.149574 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-10 00:54:07.149578 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-10 00:54:07.149582 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2026-03-10 00:54:07.149585 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2026-03-10 00:54:07.149593 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-10 00:54:07.149597 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-10 00:54:07.149601 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-10 00:54:07.149605 | orchestrator | 
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-10 00:54:07.149608 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2026-03-10 00:54:07.149612 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2026-03-10 00:54:07.149650 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-10 00:54:07.149661 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-10 00:54:07.149664 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2026-03-10 00:54:07.149668 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-10 00:54:07.149672 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-10 00:54:07.149676 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2026-03-10 00:54:07.149680 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-10 00:54:07.149684 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-10 00:54:07.149688 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2026-03-10 00:54:07.149691 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-10 00:54:07.149695 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-10 00:54:07.149699 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2026-03-10 00:54:07.149703 | orchestrator | 2026-03-10 00:54:07.149706 | orchestrator | TASK [Manage annotations] ****************************************************** 2026-03-10 
00:54:07.149710 | orchestrator | Tuesday 10 March 2026 00:54:03 +0000 (0:00:15.850) 0:05:08.926 ********* 2026-03-10 00:54:07.149714 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:54:07.149718 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:54:07.149721 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:54:07.149725 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.149729 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.149733 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.149736 | orchestrator | 2026-03-10 00:54:07.149740 | orchestrator | TASK [Manage taints] *********************************************************** 2026-03-10 00:54:07.149744 | orchestrator | Tuesday 10 March 2026 00:54:04 +0000 (0:00:00.842) 0:05:09.769 ********* 2026-03-10 00:54:07.149748 | orchestrator | skipping: [testbed-node-3] 2026-03-10 00:54:07.149752 | orchestrator | skipping: [testbed-node-4] 2026-03-10 00:54:07.149756 | orchestrator | skipping: [testbed-node-5] 2026-03-10 00:54:07.149759 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:07.149763 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:07.149767 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:07.149771 | orchestrator | 2026-03-10 00:54:07.149774 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:54:07.149778 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:54:07.149784 | orchestrator | testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-10 00:54:07.149789 | orchestrator | testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-10 00:54:07.149793 | orchestrator | testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-10 00:54:07.149837 | orchestrator | 
testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-10 00:54:07.149845 | orchestrator | testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-10 00:54:07.149849 | orchestrator | testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-10 00:54:07.149858 | orchestrator | 2026-03-10 00:54:07.149862 | orchestrator | 2026-03-10 00:54:07.149866 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:54:07.149870 | orchestrator | Tuesday 10 March 2026 00:54:04 +0000 (0:00:00.550) 0:05:10.319 ********* 2026-03-10 00:54:07.149873 | orchestrator | =============================================================================== 2026-03-10 00:54:07.149877 | orchestrator | k3s_server_post : Wait for Cilium resources ---------------------------- 48.62s 2026-03-10 00:54:07.149881 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.86s 2026-03-10 00:54:07.149885 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 24.78s 2026-03-10 00:54:07.149891 | orchestrator | kubectl : Install required packages ------------------------------------ 17.84s 2026-03-10 00:54:07.149895 | orchestrator | Manage labels ---------------------------------------------------------- 15.85s 2026-03-10 00:54:07.149899 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 11.25s 2026-03-10 00:54:07.149903 | orchestrator | kubectl : Add repository Debian ---------------------------------------- 10.19s 2026-03-10 00:54:07.149907 | orchestrator | k3s_server_post : Install Cilium ---------------------------------------- 7.15s 2026-03-10 00:54:07.149910 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 6.63s 2026-03-10 00:54:07.149914 | orchestrator | 
k3s_download : Download k3s binary x64 ---------------------------------- 5.78s 2026-03-10 00:54:07.149918 | orchestrator | k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 4.98s 2026-03-10 00:54:07.149922 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 4.23s 2026-03-10 00:54:07.149925 | orchestrator | k3s_server : Create /etc/rancher/k3s directory -------------------------- 3.15s 2026-03-10 00:54:07.149929 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.13s 2026-03-10 00:54:07.149933 | orchestrator | k3s_server : Detect Kubernetes version for label compatibility ---------- 2.70s 2026-03-10 00:54:07.149937 | orchestrator | k3s_server : Stop k3s-init ---------------------------------------------- 2.65s 2026-03-10 00:54:07.149940 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 2.56s 2026-03-10 00:54:07.149944 | orchestrator | k3s_server_post : Copy BGP manifests to first master -------------------- 2.48s 2026-03-10 00:54:07.149948 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.42s 2026-03-10 00:54:07.149952 | orchestrator | k3s_server : Kill the temporary service used for initialization --------- 2.29s 2026-03-10 00:54:07.150135 | orchestrator | 2026-03-10 00:54:07 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:54:07.150150 | orchestrator | 2026-03-10 00:54:07 | INFO  | Task 15a2f4b9-e2dc-49ec-aa23-89d04373cd82 is in state STARTED 2026-03-10 00:54:07.150182 | orchestrator | 2026-03-10 00:54:07 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:54:10.298354 | orchestrator | 2026-03-10 00:54:10 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:54:10.299554 | orchestrator | 2026-03-10 00:54:10 | INFO  | Task 
e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:54:10.300724 | orchestrator | 2026-03-10 00:54:10 | INFO  | Task e1874a02-1c07-4711-8b3e-bb035b261d42 is in state STARTED 2026-03-10 00:54:10.302143 | orchestrator | 2026-03-10 00:54:10 | INFO  | Task dfa69a1b-e8b4-433d-9ae8-4ef4f631c026 is in state STARTED 2026-03-10 00:54:10.303009 | orchestrator | 2026-03-10 00:54:10 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:54:10.304489 | orchestrator | 2026-03-10 00:54:10 | INFO  | Task 15a2f4b9-e2dc-49ec-aa23-89d04373cd82 is in state STARTED 2026-03-10 00:54:10.304607 | orchestrator | 2026-03-10 00:54:10 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:54:13.367753 | orchestrator | 2026-03-10 00:54:13 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:54:13.367947 | orchestrator | 2026-03-10 00:54:13 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:54:13.367960 | orchestrator | 2026-03-10 00:54:13 | INFO  | Task e1874a02-1c07-4711-8b3e-bb035b261d42 is in state STARTED 2026-03-10 00:54:13.371442 | orchestrator | 2026-03-10 00:54:13 | INFO  | Task dfa69a1b-e8b4-433d-9ae8-4ef4f631c026 is in state STARTED 2026-03-10 00:54:13.372638 | orchestrator | 2026-03-10 00:54:13 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:54:13.372726 | orchestrator | 2026-03-10 00:54:13 | INFO  | Task 15a2f4b9-e2dc-49ec-aa23-89d04373cd82 is in state STARTED 2026-03-10 00:54:13.372761 | orchestrator | 2026-03-10 00:54:13 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:54:16.436951 | orchestrator | 2026-03-10 00:54:16 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:54:16.438184 | orchestrator | 2026-03-10 00:54:16 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:54:16.438420 | orchestrator | 2026-03-10 00:54:16 | INFO  | Task 
e1874a02-1c07-4711-8b3e-bb035b261d42 is in state SUCCESS 2026-03-10 00:54:16.441514 | orchestrator | 2026-03-10 00:54:16 | INFO  | Task dfa69a1b-e8b4-433d-9ae8-4ef4f631c026 is in state STARTED 2026-03-10 00:54:16.444457 | orchestrator | 2026-03-10 00:54:16 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:54:16.446467 | orchestrator | 2026-03-10 00:54:16 | INFO  | Task 15a2f4b9-e2dc-49ec-aa23-89d04373cd82 is in state STARTED 2026-03-10 00:54:16.446493 | orchestrator | 2026-03-10 00:54:16 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:54:19.494595 | orchestrator | 2026-03-10 00:54:19 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:54:19.497724 | orchestrator | 2026-03-10 00:54:19 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:54:19.500157 | orchestrator | 2026-03-10 00:54:19 | INFO  | Task dfa69a1b-e8b4-433d-9ae8-4ef4f631c026 is in state STARTED 2026-03-10 00:54:19.502243 | orchestrator | 2026-03-10 00:54:19 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:54:19.504035 | orchestrator | 2026-03-10 00:54:19 | INFO  | Task 15a2f4b9-e2dc-49ec-aa23-89d04373cd82 is in state STARTED 2026-03-10 00:54:19.504077 | orchestrator | 2026-03-10 00:54:19 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:54:22.541844 | orchestrator | 2026-03-10 00:54:22 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:54:22.542561 | orchestrator | 2026-03-10 00:54:22 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:54:22.543479 | orchestrator | 2026-03-10 00:54:22 | INFO  | Task dfa69a1b-e8b4-433d-9ae8-4ef4f631c026 is in state STARTED 2026-03-10 00:54:22.544539 | orchestrator | 2026-03-10 00:54:22 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:54:22.545007 | orchestrator | 2026-03-10 00:54:22 | INFO  | Task 
15a2f4b9-e2dc-49ec-aa23-89d04373cd82 is in state SUCCESS 2026-03-10 00:54:22.545158 | orchestrator | 2026-03-10 00:54:22 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:54:25.582474 | orchestrator | 2026-03-10 00:54:25 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:54:25.585046 | orchestrator | 2026-03-10 00:54:25 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:54:25.586132 | orchestrator | 2026-03-10 00:54:25 | INFO  | Task dfa69a1b-e8b4-433d-9ae8-4ef4f631c026 is in state STARTED 2026-03-10 00:54:25.589102 | orchestrator | 2026-03-10 00:54:25 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:54:25.589145 | orchestrator | 2026-03-10 00:54:25 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:54:28.621055 | orchestrator | 2026-03-10 00:54:28 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:54:28.621127 | orchestrator | 2026-03-10 00:54:28 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:54:28.621133 | orchestrator | 2026-03-10 00:54:28 | INFO  | Task dfa69a1b-e8b4-433d-9ae8-4ef4f631c026 is in state STARTED 2026-03-10 00:54:28.621137 | orchestrator | 2026-03-10 00:54:28 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:54:28.621141 | orchestrator | 2026-03-10 00:54:28 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:54:31.663022 | orchestrator | 2026-03-10 00:54:31 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:54:31.666490 | orchestrator | 2026-03-10 00:54:31 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:54:31.667856 | orchestrator | 2026-03-10 00:54:31 | INFO  | Task dfa69a1b-e8b4-433d-9ae8-4ef4f631c026 is in state STARTED 2026-03-10 00:54:31.669448 | orchestrator | 2026-03-10 00:54:31 | INFO  | Task 
3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:54:31.669497 | orchestrator | 2026-03-10 00:54:31 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:54:34.699191 | orchestrator | 2026-03-10 00:54:34 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:54:34.700063 | orchestrator | 2026-03-10 00:54:34 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:54:34.702528 | orchestrator | 2026-03-10 00:54:34 | INFO  | Task dfa69a1b-e8b4-433d-9ae8-4ef4f631c026 is in state STARTED 2026-03-10 00:54:34.703352 | orchestrator | 2026-03-10 00:54:34 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:54:34.703377 | orchestrator | 2026-03-10 00:54:34 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:54:37.745994 | orchestrator | 2026-03-10 00:54:37 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:54:37.746154 | orchestrator | 2026-03-10 00:54:37 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:54:37.746950 | orchestrator | 2026-03-10 00:54:37 | INFO  | Task dfa69a1b-e8b4-433d-9ae8-4ef4f631c026 is in state STARTED 2026-03-10 00:54:37.749065 | orchestrator | 2026-03-10 00:54:37 | INFO  | Task 3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state STARTED 2026-03-10 00:54:37.749187 | orchestrator | 2026-03-10 00:54:37 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:54:40.790896 | orchestrator | 2026-03-10 00:54:40 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:54:40.791280 | orchestrator | 2026-03-10 00:54:40 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:54:40.792915 | orchestrator | 2026-03-10 00:54:40 | INFO  | Task dfa69a1b-e8b4-433d-9ae8-4ef4f631c026 is in state STARTED 2026-03-10 00:54:40.794440 | orchestrator | 2026-03-10 00:54:40 | INFO  | Task 
3c340e90-32fb-4cee-98ac-e41c436d9e0a is in state SUCCESS 2026-03-10 00:54:40.795818 | orchestrator | 2026-03-10 00:54:40.795851 | orchestrator | 2026-03-10 00:54:40.795862 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2026-03-10 00:54:40.795873 | orchestrator | 2026-03-10 00:54:40.795883 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-10 00:54:40.795917 | orchestrator | Tuesday 10 March 2026 00:54:12 +0000 (0:00:00.308) 0:00:00.308 ********* 2026-03-10 00:54:40.795927 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-10 00:54:40.795936 | orchestrator | 2026-03-10 00:54:40.795945 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-10 00:54:40.795954 | orchestrator | Tuesday 10 March 2026 00:54:13 +0000 (0:00:00.892) 0:00:01.200 ********* 2026-03-10 00:54:40.795963 | orchestrator | changed: [testbed-manager] 2026-03-10 00:54:40.795972 | orchestrator | 2026-03-10 00:54:40.795980 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2026-03-10 00:54:40.795989 | orchestrator | Tuesday 10 March 2026 00:54:14 +0000 (0:00:01.609) 0:00:02.810 ********* 2026-03-10 00:54:40.795998 | orchestrator | changed: [testbed-manager] 2026-03-10 00:54:40.796006 | orchestrator | 2026-03-10 00:54:40.796015 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:54:40.796025 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:54:40.796036 | orchestrator | 2026-03-10 00:54:40.796045 | orchestrator | 2026-03-10 00:54:40.796053 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:54:40.796062 | orchestrator | Tuesday 10 March 2026 00:54:15 +0000 (0:00:00.738) 0:00:03.548 ********* 
2026-03-10 00:54:40.796071 | orchestrator | =============================================================================== 2026-03-10 00:54:40.796079 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.61s 2026-03-10 00:54:40.796087 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.89s 2026-03-10 00:54:40.796096 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.74s 2026-03-10 00:54:40.796104 | orchestrator | 2026-03-10 00:54:40.796113 | orchestrator | 2026-03-10 00:54:40.796121 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2026-03-10 00:54:40.796130 | orchestrator | 2026-03-10 00:54:40.796138 | orchestrator | TASK [Get home directory of operator user] ************************************* 2026-03-10 00:54:40.796147 | orchestrator | Tuesday 10 March 2026 00:54:11 +0000 (0:00:00.351) 0:00:00.351 ********* 2026-03-10 00:54:40.796155 | orchestrator | ok: [testbed-manager] 2026-03-10 00:54:40.796164 | orchestrator | 2026-03-10 00:54:40.796173 | orchestrator | TASK [Create .kube directory] ************************************************** 2026-03-10 00:54:40.796181 | orchestrator | Tuesday 10 March 2026 00:54:12 +0000 (0:00:00.966) 0:00:01.318 ********* 2026-03-10 00:54:40.796190 | orchestrator | ok: [testbed-manager] 2026-03-10 00:54:40.796198 | orchestrator | 2026-03-10 00:54:40.796207 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2026-03-10 00:54:40.796216 | orchestrator | Tuesday 10 March 2026 00:54:13 +0000 (0:00:00.722) 0:00:02.040 ********* 2026-03-10 00:54:40.796224 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2026-03-10 00:54:40.796232 | orchestrator | 2026-03-10 00:54:40.796241 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2026-03-10 00:54:40.796249 | 
orchestrator | Tuesday 10 March 2026 00:54:14 +0000 (0:00:00.820) 0:00:02.860 ********* 2026-03-10 00:54:40.796258 | orchestrator | changed: [testbed-manager] 2026-03-10 00:54:40.796266 | orchestrator | 2026-03-10 00:54:40.796275 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2026-03-10 00:54:40.796283 | orchestrator | Tuesday 10 March 2026 00:54:16 +0000 (0:00:02.314) 0:00:05.175 ********* 2026-03-10 00:54:40.796292 | orchestrator | changed: [testbed-manager] 2026-03-10 00:54:40.796300 | orchestrator | 2026-03-10 00:54:40.796328 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2026-03-10 00:54:40.796343 | orchestrator | Tuesday 10 March 2026 00:54:17 +0000 (0:00:00.688) 0:00:05.864 ********* 2026-03-10 00:54:40.796356 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-10 00:54:40.796370 | orchestrator | 2026-03-10 00:54:40.796395 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2026-03-10 00:54:40.796417 | orchestrator | Tuesday 10 March 2026 00:54:19 +0000 (0:00:01.923) 0:00:07.788 ********* 2026-03-10 00:54:40.796428 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-10 00:54:40.796438 | orchestrator | 2026-03-10 00:54:40.796448 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2026-03-10 00:54:40.796458 | orchestrator | Tuesday 10 March 2026 00:54:20 +0000 (0:00:01.247) 0:00:09.035 ********* 2026-03-10 00:54:40.796468 | orchestrator | ok: [testbed-manager] 2026-03-10 00:54:40.796478 | orchestrator | 2026-03-10 00:54:40.796488 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2026-03-10 00:54:40.796498 | orchestrator | Tuesday 10 March 2026 00:54:21 +0000 (0:00:00.589) 0:00:09.625 ********* 2026-03-10 00:54:40.796508 | orchestrator | ok: [testbed-manager] 2026-03-10 00:54:40.796518 | 
orchestrator | 2026-03-10 00:54:40.796527 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:54:40.796538 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 00:54:40.796548 | orchestrator | 2026-03-10 00:54:40.796558 | orchestrator | 2026-03-10 00:54:40.796568 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:54:40.796578 | orchestrator | Tuesday 10 March 2026 00:54:21 +0000 (0:00:00.316) 0:00:09.942 ********* 2026-03-10 00:54:40.796588 | orchestrator | =============================================================================== 2026-03-10 00:54:40.796598 | orchestrator | Write kubeconfig file --------------------------------------------------- 2.31s 2026-03-10 00:54:40.796608 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.92s 2026-03-10 00:54:40.796618 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 1.25s 2026-03-10 00:54:40.796641 | orchestrator | Get home directory of operator user ------------------------------------- 0.97s 2026-03-10 00:54:40.796651 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.82s 2026-03-10 00:54:40.796662 | orchestrator | Create .kube directory -------------------------------------------------- 0.72s 2026-03-10 00:54:40.796673 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.69s 2026-03-10 00:54:40.796682 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.59s 2026-03-10 00:54:40.796691 | orchestrator | Enable kubectl command line completion ---------------------------------- 0.32s 2026-03-10 00:54:40.796700 | orchestrator | 2026-03-10 00:54:40.796708 | orchestrator | 2026-03-10 00:54:40.796717 | orchestrator | PLAY [Set 
kolla_action_rabbitmq] *********************************************** 2026-03-10 00:54:40.796726 | orchestrator | 2026-03-10 00:54:40.796734 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-10 00:54:40.796743 | orchestrator | Tuesday 10 March 2026 00:52:00 +0000 (0:00:00.143) 0:00:00.143 ********* 2026-03-10 00:54:40.796772 | orchestrator | ok: [localhost] => { 2026-03-10 00:54:40.796782 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2026-03-10 00:54:40.796791 | orchestrator | } 2026-03-10 00:54:40.796800 | orchestrator | 2026-03-10 00:54:40.796808 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2026-03-10 00:54:40.796817 | orchestrator | Tuesday 10 March 2026 00:52:00 +0000 (0:00:00.098) 0:00:00.242 ********* 2026-03-10 00:54:40.796827 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2026-03-10 00:54:40.796838 | orchestrator | ...ignoring 2026-03-10 00:54:40.796847 | orchestrator | 2026-03-10 00:54:40.796855 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2026-03-10 00:54:40.796864 | orchestrator | Tuesday 10 March 2026 00:52:04 +0000 (0:00:04.521) 0:00:04.763 ********* 2026-03-10 00:54:40.796873 | orchestrator | skipping: [localhost] 2026-03-10 00:54:40.796890 | orchestrator | 2026-03-10 00:54:40.796899 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2026-03-10 00:54:40.796907 | orchestrator | Tuesday 10 March 2026 00:52:05 +0000 (0:00:00.208) 0:00:04.972 ********* 2026-03-10 00:54:40.796916 | orchestrator | ok: [localhost] 2026-03-10 00:54:40.796925 | orchestrator | 2026-03-10 00:54:40.796933 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2026-03-10 00:54:40.796942 | orchestrator | 2026-03-10 00:54:40.796951 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 00:54:40.796959 | orchestrator | Tuesday 10 March 2026 00:52:05 +0000 (0:00:00.576) 0:00:05.549 ********* 2026-03-10 00:54:40.796968 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:40.796976 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:40.796985 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:40.796993 | orchestrator | 2026-03-10 00:54:40.797002 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 00:54:40.797011 | orchestrator | Tuesday 10 March 2026 00:52:06 +0000 (0:00:00.574) 0:00:06.124 ********* 2026-03-10 00:54:40.797020 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True) 2026-03-10 00:54:40.797029 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2026-03-10 00:54:40.797037 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2026-03-10 00:54:40.797046 | orchestrator | 2026-03-10 00:54:40.797054 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2026-03-10 00:54:40.797063 | orchestrator | 2026-03-10 00:54:40.797071 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-10 00:54:40.797080 | orchestrator | Tuesday 10 March 2026 00:52:08 +0000 (0:00:02.110) 0:00:08.234 ********* 2026-03-10 00:54:40.797096 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:54:40.797104 | orchestrator | 2026-03-10 00:54:40.797113 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-10 00:54:40.797121 | orchestrator | Tuesday 10 March 2026 00:52:11 +0000 (0:00:03.044) 0:00:11.279 ********* 2026-03-10 
00:54:40.797130 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:40.797139 | orchestrator | 2026-03-10 00:54:40.797147 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2026-03-10 00:54:40.797156 | orchestrator | Tuesday 10 March 2026 00:52:13 +0000 (0:00:01.597) 0:00:12.877 ********* 2026-03-10 00:54:40.797164 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:40.797173 | orchestrator | 2026-03-10 00:54:40.797182 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2026-03-10 00:54:40.797190 | orchestrator | Tuesday 10 March 2026 00:52:13 +0000 (0:00:00.460) 0:00:13.337 ********* 2026-03-10 00:54:40.797199 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:40.797210 | orchestrator | 2026-03-10 00:54:40.797225 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2026-03-10 00:54:40.797238 | orchestrator | Tuesday 10 March 2026 00:52:14 +0000 (0:00:00.512) 0:00:13.850 ********* 2026-03-10 00:54:40.797251 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:40.797274 | orchestrator | 2026-03-10 00:54:40.797291 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2026-03-10 00:54:40.797303 | orchestrator | Tuesday 10 March 2026 00:52:14 +0000 (0:00:00.501) 0:00:14.351 ********* 2026-03-10 00:54:40.797316 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:40.797329 | orchestrator | 2026-03-10 00:54:40.797343 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-10 00:54:40.797356 | orchestrator | Tuesday 10 March 2026 00:52:15 +0000 (0:00:01.223) 0:00:15.575 ********* 2026-03-10 00:54:40.797369 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:54:40.797381 | orchestrator | 2026-03-10 00:54:40.797393 
| orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2026-03-10 00:54:40.797415 | orchestrator | Tuesday 10 March 2026 00:52:16 +0000 (0:00:00.942) 0:00:16.517 ********* 2026-03-10 00:54:40.797461 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:40.797493 | orchestrator | 2026-03-10 00:54:40.797506 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2026-03-10 00:54:40.797517 | orchestrator | Tuesday 10 March 2026 00:52:17 +0000 (0:00:00.927) 0:00:17.445 ********* 2026-03-10 00:54:40.797528 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:40.797539 | orchestrator | 2026-03-10 00:54:40.797550 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2026-03-10 00:54:40.797560 | orchestrator | Tuesday 10 March 2026 00:52:18 +0000 (0:00:00.445) 0:00:17.890 ********* 2026-03-10 00:54:40.797571 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:40.797582 | orchestrator | 2026-03-10 00:54:40.797593 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2026-03-10 00:54:40.797604 | orchestrator | Tuesday 10 March 2026 00:52:18 +0000 (0:00:00.754) 0:00:18.644 ********* 2026-03-10 00:54:40.797623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-10 00:54:40.797649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-10 00:54:40.797663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 
'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-10 00:54:40.797683 | orchestrator | 2026-03-10 00:54:40.797694 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2026-03-10 00:54:40.797705 | orchestrator | Tuesday 10 March 2026 00:52:21 +0000 (0:00:02.268) 0:00:20.912 ********* 2026-03-10 00:54:40.797727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-10 00:54:40.797740 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-10 00:54:40.797834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': 
'30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-10 00:54:40.797848 | orchestrator | 2026-03-10 00:54:40.797860 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2026-03-10 00:54:40.797872 | orchestrator | Tuesday 10 March 2026 00:52:25 +0000 (0:00:04.840) 0:00:25.753 ********* 2026-03-10 00:54:40.797883 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-10 00:54:40.797895 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-10 00:54:40.797914 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2026-03-10 00:54:40.797926 | orchestrator | 2026-03-10 00:54:40.797938 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 2026-03-10 00:54:40.797949 | orchestrator | Tuesday 10 March 2026 00:52:28 +0000 (0:00:02.592) 0:00:28.345 ********* 2026-03-10 00:54:40.797961 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-10 00:54:40.797973 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-10 00:54:40.797984 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2026-03-10 00:54:40.797996 | orchestrator | 2026-03-10 00:54:40.798008 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2026-03-10 00:54:40.798084 | orchestrator | Tuesday 10 March 2026 00:52:31 +0000 (0:00:03.318) 0:00:31.663 ********* 2026-03-10 00:54:40.798098 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-10 00:54:40.798109 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-10 00:54:40.798120 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2026-03-10 00:54:40.798131 | orchestrator | 2026-03-10 00:54:40.798141 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2026-03-10 00:54:40.798152 | orchestrator | Tuesday 10 March 2026 00:52:34 +0000 (0:00:03.025) 0:00:34.689 ********* 2026-03-10 00:54:40.798163 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-10 00:54:40.798173 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-10 00:54:40.798184 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2026-03-10 00:54:40.798196 | orchestrator | 2026-03-10 00:54:40.798207 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ******************************** 2026-03-10 00:54:40.798221 | orchestrator | Tuesday 10 March 2026 00:52:38 +0000 (0:00:03.252) 0:00:37.941 ********* 2026-03-10 00:54:40.798239 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-10 00:54:40.798264 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-10 00:54:40.798285 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2026-03-10 00:54:40.798302 | orchestrator | 2026-03-10 00:54:40.798320 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2026-03-10 00:54:40.798337 | orchestrator | Tuesday 10 March 2026 00:52:40 +0000 (0:00:02.098) 0:00:40.040 ********* 2026-03-10 00:54:40.798355 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-10 
00:54:40.798372 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-10 00:54:40.798391 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2026-03-10 00:54:40.798408 | orchestrator | 2026-03-10 00:54:40.798425 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2026-03-10 00:54:40.798442 | orchestrator | Tuesday 10 March 2026 00:52:42 +0000 (0:00:02.720) 0:00:42.761 ********* 2026-03-10 00:54:40.798459 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:40.798476 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:40.798493 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:40.798511 | orchestrator | 2026-03-10 00:54:40.798529 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2026-03-10 00:54:40.798546 | orchestrator | Tuesday 10 March 2026 00:52:43 +0000 (0:00:00.523) 0:00:43.284 ********* 2026-03-10 00:54:40.798606 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 
'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-10 00:54:40.798642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-10 00:54:40.798664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2026-03-10 00:54:40.798683 | orchestrator | 2026-03-10 00:54:40.798695 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2026-03-10 00:54:40.798706 | orchestrator | Tuesday 10 March 2026 00:52:44 +0000 (0:00:01.492) 0:00:44.777 ********* 2026-03-10 00:54:40.798717 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:40.798728 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:40.798739 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:40.798915 | orchestrator | 2026-03-10 00:54:40.798950 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2026-03-10 00:54:40.798962 | orchestrator | Tuesday 10 March 2026 00:52:45 +0000 (0:00:01.006) 0:00:45.783 ********* 2026-03-10 00:54:40.798973 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:40.798984 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:40.799017 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:40.799029 | orchestrator | 2026-03-10 00:54:40.799039 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************ 2026-03-10 00:54:40.799050 | orchestrator | Tuesday 10 March 2026 00:52:54 +0000 (0:00:08.103) 0:00:53.887 ********* 2026-03-10 00:54:40.799061 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:40.799070 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:40.799080 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:40.799090 | orchestrator | 2026-03-10 00:54:40.799100 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 
2026-03-10 00:54:40.799109 | orchestrator | 2026-03-10 00:54:40.799119 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-10 00:54:40.799128 | orchestrator | Tuesday 10 March 2026 00:52:54 +0000 (0:00:00.453) 0:00:54.340 ********* 2026-03-10 00:54:40.799138 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:40.799149 | orchestrator | 2026-03-10 00:54:40.799159 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-10 00:54:40.799169 | orchestrator | Tuesday 10 March 2026 00:52:55 +0000 (0:00:00.910) 0:00:55.251 ********* 2026-03-10 00:54:40.799178 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:54:40.799188 | orchestrator | 2026-03-10 00:54:40.799205 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-10 00:54:40.799215 | orchestrator | Tuesday 10 March 2026 00:52:55 +0000 (0:00:00.448) 0:00:55.699 ********* 2026-03-10 00:54:40.799225 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:40.799234 | orchestrator | 2026-03-10 00:54:40.799244 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-10 00:54:40.799254 | orchestrator | Tuesday 10 March 2026 00:53:02 +0000 (0:00:06.843) 0:01:02.543 ********* 2026-03-10 00:54:40.799263 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:54:40.799273 | orchestrator | 2026-03-10 00:54:40.799283 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-10 00:54:40.799292 | orchestrator | 2026-03-10 00:54:40.799302 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-10 00:54:40.799311 | orchestrator | Tuesday 10 March 2026 00:53:54 +0000 (0:00:51.939) 0:01:54.482 ********* 2026-03-10 00:54:40.799320 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:40.799330 | orchestrator | 2026-03-10 
00:54:40.799340 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-10 00:54:40.799350 | orchestrator | Tuesday 10 March 2026 00:53:55 +0000 (0:00:00.689) 0:01:55.172 ********* 2026-03-10 00:54:40.799359 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:54:40.799369 | orchestrator | 2026-03-10 00:54:40.799378 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] *********************************** 2026-03-10 00:54:40.799388 | orchestrator | Tuesday 10 March 2026 00:53:55 +0000 (0:00:00.440) 0:01:55.612 ********* 2026-03-10 00:54:40.799398 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:40.799407 | orchestrator | 2026-03-10 00:54:40.799417 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-10 00:54:40.799426 | orchestrator | Tuesday 10 March 2026 00:53:57 +0000 (0:00:01.871) 0:01:57.484 ********* 2026-03-10 00:54:40.799436 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:54:40.799445 | orchestrator | 2026-03-10 00:54:40.799455 | orchestrator | PLAY [Restart rabbitmq services] *********************************************** 2026-03-10 00:54:40.799464 | orchestrator | 2026-03-10 00:54:40.799474 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] ******************************* 2026-03-10 00:54:40.799483 | orchestrator | Tuesday 10 March 2026 00:54:15 +0000 (0:00:17.554) 0:02:15.038 ********* 2026-03-10 00:54:40.799493 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:40.799503 | orchestrator | 2026-03-10 00:54:40.799526 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] ********************** 2026-03-10 00:54:40.799537 | orchestrator | Tuesday 10 March 2026 00:54:16 +0000 (0:00:00.785) 0:02:15.824 ********* 2026-03-10 00:54:40.799546 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:54:40.799556 | orchestrator | 2026-03-10 00:54:40.799574 | orchestrator | TASK [rabbitmq : 
Restart rabbitmq container] *********************************** 2026-03-10 00:54:40.799584 | orchestrator | Tuesday 10 March 2026 00:54:16 +0000 (0:00:00.450) 0:02:16.275 ********* 2026-03-10 00:54:40.799593 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:40.799603 | orchestrator | 2026-03-10 00:54:40.799613 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ******************************** 2026-03-10 00:54:40.799622 | orchestrator | Tuesday 10 March 2026 00:54:18 +0000 (0:00:01.832) 0:02:18.107 ********* 2026-03-10 00:54:40.799632 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:54:40.799642 | orchestrator | 2026-03-10 00:54:40.799652 | orchestrator | PLAY [Apply rabbitmq post-configuration] *************************************** 2026-03-10 00:54:40.799661 | orchestrator | 2026-03-10 00:54:40.799671 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2026-03-10 00:54:40.799680 | orchestrator | Tuesday 10 March 2026 00:54:34 +0000 (0:00:16.283) 0:02:34.390 ********* 2026-03-10 00:54:40.799690 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:54:40.799699 | orchestrator | 2026-03-10 00:54:40.799709 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2026-03-10 00:54:40.799718 | orchestrator | Tuesday 10 March 2026 00:54:35 +0000 (0:00:00.612) 0:02:35.003 ********* 2026-03-10 00:54:40.799728 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:54:40.799737 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:54:40.799766 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:54:40.799777 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-10 00:54:40.799787 | orchestrator | enable_outward_rabbitmq_True 2026-03-10 00:54:40.799797 | orchestrator | 2026-03-10 00:54:40.799806 | orchestrator | PLAY [Apply role rabbitmq (outward)] 
******************************************* 2026-03-10 00:54:40.799816 | orchestrator | skipping: no hosts matched 2026-03-10 00:54:40.799826 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2026-03-10 00:54:40.799836 | orchestrator | outward_rabbitmq_restart 2026-03-10 00:54:40.799845 | orchestrator | 2026-03-10 00:54:40.799855 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2026-03-10 00:54:40.799865 | orchestrator | skipping: no hosts matched 2026-03-10 00:54:40.799874 | orchestrator | 2026-03-10 00:54:40.799884 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2026-03-10 00:54:40.799893 | orchestrator | skipping: no hosts matched 2026-03-10 00:54:40.799903 | orchestrator | 2026-03-10 00:54:40.799912 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 00:54:40.799923 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-10 00:54:40.799933 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-10 00:54:40.799943 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:54:40.799953 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 00:54:40.799963 | orchestrator | 2026-03-10 00:54:40.799972 | orchestrator | 2026-03-10 00:54:40.799982 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 00:54:40.799997 | orchestrator | Tuesday 10 March 2026 00:54:37 +0000 (0:00:02.544) 0:02:37.548 ********* 2026-03-10 00:54:40.800007 | orchestrator | =============================================================================== 2026-03-10 00:54:40.800017 | orchestrator | rabbitmq : Waiting for rabbitmq to 
start ------------------------------- 85.78s 2026-03-10 00:54:40.800026 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.55s 2026-03-10 00:54:40.800035 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 8.10s 2026-03-10 00:54:40.800053 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 4.84s 2026-03-10 00:54:40.800063 | orchestrator | Check RabbitMQ service -------------------------------------------------- 4.52s 2026-03-10 00:54:40.800072 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 3.32s 2026-03-10 00:54:40.800082 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 3.25s 2026-03-10 00:54:40.800091 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 3.04s 2026-03-10 00:54:40.800101 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 3.03s 2026-03-10 00:54:40.800110 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.72s 2026-03-10 00:54:40.800120 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.59s 2026-03-10 00:54:40.800129 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.55s 2026-03-10 00:54:40.800139 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.39s 2026-03-10 00:54:40.800148 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 2.27s 2026-03-10 00:54:40.800158 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.11s 2026-03-10 00:54:40.800168 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 2.10s 2026-03-10 00:54:40.800177 | orchestrator | rabbitmq : Get container facts 
------------------------------------------ 1.60s 2026-03-10 00:54:40.800192 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.49s 2026-03-10 00:54:40.800202 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.34s 2026-03-10 00:54:40.800212 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.22s 2026-03-10 00:54:40.800222 | orchestrator | 2026-03-10 00:54:40 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:54:43.853452 | orchestrator | 2026-03-10 00:54:43 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:54:43.856209 | orchestrator | 2026-03-10 00:54:43 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:54:43.859595 | orchestrator | 2026-03-10 00:54:43 | INFO  | Task dfa69a1b-e8b4-433d-9ae8-4ef4f631c026 is in state STARTED 2026-03-10 00:54:43.859655 | orchestrator | 2026-03-10 00:54:43 | INFO  | Wait 1 second(s) until the next check [identical state-check cycles repeated every ~3 s from 00:54:46 through 00:55:44; all three tasks remained in state STARTED] 2026-03-10 00:55:47.941381 | orchestrator | 2026-03-10 00:55:47 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:55:47.942917 | orchestrator | 2026-03-10 00:55:47 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:55:47.947032 | orchestrator | 2026-03-10 00:55:47 | INFO  | Task dfa69a1b-e8b4-433d-9ae8-4ef4f631c026 is in state SUCCESS 2026-03-10 00:55:47.947809 | orchestrator | 2026-03-10 00:55:47.947869 | orchestrator | 2026-03-10
00:55:47.947875 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-10 00:55:47.947881 | orchestrator |
2026-03-10 00:55:47.947885 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-10 00:55:47.947890 | orchestrator | Tuesday 10 March 2026 00:53:02 +0000 (0:00:00.362) 0:00:00.363 *********
2026-03-10 00:55:47.947894 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:55:47.947913 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:55:47.947917 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:55:47.947921 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:55:47.947925 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:55:47.947930 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:55:47.947934 | orchestrator |
2026-03-10 00:55:47.947938 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-10 00:55:47.947954 | orchestrator | Tuesday 10 March 2026 00:53:03 +0000 (0:00:00.972) 0:00:01.335 *********
2026-03-10 00:55:47.947958 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True)
2026-03-10 00:55:47.947963 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True)
2026-03-10 00:55:47.947967 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True)
2026-03-10 00:55:47.947971 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True)
2026-03-10 00:55:47.947975 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True)
2026-03-10 00:55:47.947979 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True)
2026-03-10 00:55:47.947983 | orchestrator |
2026-03-10 00:55:47.947988 | orchestrator | PLAY [Apply role ovn-controller] ***********************************************
2026-03-10 00:55:47.947992 | orchestrator |
2026-03-10 00:55:47.947996 | orchestrator | TASK [ovn-controller : include_tasks] ******************************************
2026-03-10 00:55:47.948000 | orchestrator | Tuesday 10 March 2026 00:53:04 +0000 (0:00:00.970) 0:00:02.306 *********
2026-03-10 00:55:47.948019 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:55:47.948025 | orchestrator |
2026-03-10 00:55:47.948029 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] **********************
2026-03-10 00:55:47.948034 | orchestrator | Tuesday 10 March 2026 00:53:05 +0000 (0:00:01.379) 0:00:03.685 *********
2026-03-10 00:55:47.948039 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948046 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948050 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948054 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948062 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948067 | orchestrator |
2026-03-10 00:55:47.948082 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************
2026-03-10 00:55:47.948086 | orchestrator | Tuesday 10 March 2026 00:53:07 +0000 (0:00:01.833) 0:00:05.518 *********
2026-03-10 00:55:47.948092 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948096 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948104 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948112 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948115 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948119 | orchestrator |
2026-03-10 00:55:47.948123 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2026-03-10 00:55:47.948127 | orchestrator | Tuesday 10 March 2026 00:53:09 +0000 (0:00:02.657) 0:00:08.176 *********
2026-03-10 00:55:47.948131 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948135 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948143 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948148 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948195 | orchestrator |
2026-03-10 00:55:47.948234 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2026-03-10 00:55:47.948239 | orchestrator | Tuesday 10 March 2026 00:53:11 +0000 (0:00:01.989) 0:00:10.263 *********
2026-03-10 00:55:47.948242 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948246 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948250 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948254 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948266 | orchestrator |
2026-03-10 00:55:47.948273 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2026-03-10 00:55:47.948280 | orchestrator | Tuesday 10 March 2026 00:53:13 +0000 (0:00:01.830) 0:00:12.253 *********
2026-03-10 00:55:47.948287 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948292 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948295 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948304 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948309 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.948313 | orchestrator |
2026-03-10 00:55:47.948317 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2026-03-10 00:55:47.948322 | orchestrator | Tuesday 10 March 2026 00:53:15 +0000 (0:00:02.567) 0:00:14.084 *********
2026-03-10 00:55:47.948326 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:55:47.948331 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:55:47.948335 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:55:47.948339 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:55:47.948343 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:55:47.948348 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:55:47.948352 | orchestrator |
2026-03-10 00:55:47.948357 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2026-03-10 00:55:47.948361 | orchestrator | Tuesday 10 March 2026 00:53:18 +0000 (0:00:02.567) 0:00:16.651 *********
2026-03-10 00:55:47.948365 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2026-03-10 00:55:47.948370 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2026-03-10 00:55:47.948374 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2026-03-10 00:55:47.948382 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2026-03-10 00:55:47.948387 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-10 00:55:47.948391 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2026-03-10 00:55:47.948395 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2026-03-10 00:55:47.948399 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-10 00:55:47.948406 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-10 00:55:47.948411 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-10 00:55:47.948415 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-10 00:55:47.948420 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-10 00:55:47.948424 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2026-03-10 00:55:47.948432 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-10 00:55:47.948436 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-10 00:55:47.948441 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-10 00:55:47.948445 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-10 00:55:47.948450 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-10 00:55:47.948455 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2026-03-10 00:55:47.948459 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-10 00:55:47.948463 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-10 00:55:47.948468 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-10 00:55:47.948472 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-10 00:55:47.948476 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-10 00:55:47.948481 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-10 00:55:47.948485 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2026-03-10 00:55:47.948489 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-10 00:55:47.948494 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-10 00:55:47.948498 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-10 00:55:47.948502 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-10 00:55:47.948506 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2026-03-10 00:55:47.948509 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-10 00:55:47.948514 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-10 00:55:47.948521 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-10 00:55:47.948525 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-10 00:55:47.948529 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-10 00:55:47.948532 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2026-03-10 00:55:47.948536 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-10 00:55:47.948540 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-10 00:55:47.948544 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2026-03-10 00:55:47.948548 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-10 00:55:47.948552 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2026-03-10 00:55:47.948556 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2026-03-10 00:55:47.948560 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2026-03-10 00:55:47.948566 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2026-03-10 00:55:47.948570 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2026-03-10 00:55:47.948573 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2026-03-10 00:55:47.948577 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-10 00:55:47.948583 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-10 00:55:47.948587 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2026-03-10 00:55:47.948591 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-10 00:55:47.948595 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2026-03-10 00:55:47.948599 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-10 00:55:47.948603 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2026-03-10 00:55:47.948607 | orchestrator |
2026-03-10 00:55:47.948610 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-10 00:55:47.948614 | orchestrator | Tuesday 10 March 2026 00:53:41 +0000 (0:00:22.868) 0:00:39.520 *********
2026-03-10 00:55:47.948618 | orchestrator |
2026-03-10 00:55:47.948622 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-10 00:55:47.948626 | orchestrator | Tuesday 10 March 2026 00:53:41 +0000 (0:00:00.154) 0:00:39.674 *********
2026-03-10 00:55:47.948629 | orchestrator |
2026-03-10 00:55:47.948633 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-10 00:55:47.948637 | orchestrator | Tuesday 10 March 2026 00:53:41 +0000 (0:00:00.134) 0:00:39.809 *********
2026-03-10 00:55:47.948641 | orchestrator |
2026-03-10 00:55:47.948645 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-10 00:55:47.948652 | orchestrator | Tuesday 10 March 2026 00:53:41 +0000 (0:00:00.106) 0:00:39.915 *********
2026-03-10 00:55:47.948656 | orchestrator |
2026-03-10 00:55:47.948660 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-10 00:55:47.948664 | orchestrator | Tuesday 10 March 2026 00:53:41 +0000 (0:00:00.136) 0:00:40.052 *********
2026-03-10 00:55:47.948667 | orchestrator |
2026-03-10 00:55:47.948685 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2026-03-10 00:55:47.948689 | orchestrator | Tuesday 10 March 2026 00:53:41 +0000 (0:00:00.096) 0:00:40.148 *********
2026-03-10 00:55:47.948693 | orchestrator |
2026-03-10 00:55:47.948697 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2026-03-10 00:55:47.948700 | orchestrator | Tuesday 10 March 2026 00:53:41 +0000 (0:00:00.077) 0:00:40.226 *********
2026-03-10 00:55:47.948704 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:55:47.948708 | orchestrator | ok: [testbed-node-5]
2026-03-10 00:55:47.948712 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:55:47.948716 | orchestrator | ok: [testbed-node-3]
2026-03-10 00:55:47.948720 | orchestrator | ok: [testbed-node-4]
2026-03-10 00:55:47.948724 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:55:47.948727 | orchestrator |
2026-03-10 00:55:47.948731 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2026-03-10 00:55:47.948735 | orchestrator | Tuesday 10 March 2026 00:53:44 +0000 (0:00:02.181) 0:00:42.408 *********
2026-03-10 00:55:47.948739 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:55:47.948743 | orchestrator | changed: [testbed-node-3]
2026-03-10 00:55:47.948747 | orchestrator | changed: [testbed-node-4]
2026-03-10 00:55:47.948751 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:55:47.948755 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:55:47.948758 | orchestrator | changed: [testbed-node-5]
2026-03-10 00:55:47.948762 | orchestrator |
2026-03-10 00:55:47.948766 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2026-03-10 00:55:47.948770 | orchestrator |
2026-03-10 00:55:47.948773 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-10 00:55:47.948777 | orchestrator | Tuesday 10 March 2026 00:54:15 +0000 (0:00:31.656) 0:01:14.064 *********
2026-03-10 00:55:47.948781 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:55:47.948785 | orchestrator |
2026-03-10 00:55:47.948788 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-10 00:55:47.948792 | orchestrator | Tuesday 10 March 2026 00:54:16 +0000 (0:00:01.216) 0:01:15.280 *********
2026-03-10 00:55:47.948796 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:55:47.948800 | orchestrator |
2026-03-10 00:55:47.948803 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2026-03-10 00:55:47.948807 | orchestrator | Tuesday 10 March 2026 00:54:17 +0000 (0:00:00.623) 0:01:15.903 *********
2026-03-10 00:55:47.948811 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:55:47.948815 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:55:47.948819 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:55:47.948822 | orchestrator |
2026-03-10 00:55:47.948827 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2026-03-10 00:55:47.948830 | orchestrator | Tuesday 10 March 2026 00:54:18 +0000 (0:00:01.101) 0:01:17.005 *********
2026-03-10 00:55:47.948834 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:55:47.948838 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:55:47.948842 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:55:47.948848 | orchestrator |
2026-03-10 00:55:47.948852 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2026-03-10 00:55:47.948856 | orchestrator | Tuesday 10 March 2026 00:54:19 +0000 (0:00:00.380) 0:01:17.385 *********
2026-03-10 00:55:47.948860 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:55:47.948864 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:55:47.948871 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:55:47.948875 | orchestrator |
2026-03-10 00:55:47.948878 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2026-03-10 00:55:47.948882 | orchestrator | Tuesday 10 March 2026 00:54:19 +0000 (0:00:00.507) 0:01:17.893 *********
2026-03-10 00:55:47.948886 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:55:47.948890 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:55:47.948893 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:55:47.948897 | orchestrator |
2026-03-10 00:55:47.948907 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2026-03-10 00:55:47.948911 | orchestrator | Tuesday 10 March 2026 00:54:20 +0000 (0:00:00.712) 0:01:18.606 *********
2026-03-10 00:55:47.948915 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:55:47.948919 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:55:47.948922 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:55:47.948926 | orchestrator |
2026-03-10 00:55:47.948930 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2026-03-10 00:55:47.948934 | orchestrator | Tuesday 10 March 2026 00:54:21 +0000 (0:00:01.037) 0:01:19.643 *********
2026-03-10 00:55:47.948938 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:55:47.948941 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:55:47.948945 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:55:47.948949 | orchestrator |
2026-03-10 00:55:47.948952 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2026-03-10 00:55:47.948956 | orchestrator | Tuesday 10 March 2026 00:54:21 +0000 (0:00:00.518) 0:01:20.161 *********
2026-03-10 00:55:47.948960 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:55:47.948964 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:55:47.948968 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:55:47.948971 | orchestrator |
2026-03-10 00:55:47.948975 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2026-03-10 00:55:47.948979 | orchestrator | Tuesday 10 March 2026 00:54:22 +0000 (0:00:00.316) 0:01:20.478 *********
2026-03-10 00:55:47.948983 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:55:47.948987 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:55:47.948990 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:55:47.948994 | orchestrator |
2026-03-10 00:55:47.948998 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2026-03-10 00:55:47.949002 | orchestrator | Tuesday 10 March 2026 00:54:22 +0000 (0:00:00.343) 0:01:20.821 *********
2026-03-10 00:55:47.949005 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:55:47.949009 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:55:47.949013 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:55:47.949016 | orchestrator |
2026-03-10 00:55:47.949021 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2026-03-10 00:55:47.949024 | orchestrator | Tuesday 10 March 2026 00:54:23 +0000 (0:00:00.485) 0:01:21.306 *********
2026-03-10 00:55:47.949028 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:55:47.949032 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:55:47.949036 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:55:47.949039 | orchestrator |
2026-03-10 00:55:47.949043 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2026-03-10 00:55:47.949047 | orchestrator | Tuesday 10 March 2026 00:54:23 +0000 (0:00:00.295) 0:01:21.602 *********
2026-03-10 00:55:47.949051 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:55:47.949054 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:55:47.949058 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:55:47.949062 | orchestrator |
2026-03-10 00:55:47.949065 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2026-03-10 00:55:47.949069 | orchestrator | Tuesday 10 March 2026 00:54:23 +0000 (0:00:00.378) 0:01:21.980 *********
2026-03-10 00:55:47.949073 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:55:47.949077 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:55:47.949081 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:55:47.949088 | orchestrator |
2026-03-10 00:55:47.949091 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2026-03-10 00:55:47.949095 | orchestrator | Tuesday 10 March 2026 00:54:23 +0000 (0:00:00.288) 0:01:22.268 *********
2026-03-10 00:55:47.949099 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:55:47.949103 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:55:47.949106 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:55:47.949110 | orchestrator |
2026-03-10 00:55:47.949114 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2026-03-10 00:55:47.949118 | orchestrator | Tuesday 10 March 2026 00:54:24 +0000 (0:00:00.559) 0:01:22.828 *********
2026-03-10 00:55:47.949121 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:55:47.949125 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:55:47.949129 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:55:47.949133 | orchestrator |
2026-03-10 00:55:47.949137 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2026-03-10 00:55:47.949141 | orchestrator | Tuesday 10 March 2026 00:54:24 +0000 (0:00:00.308) 0:01:23.136 *********
2026-03-10 00:55:47.949144 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:55:47.949148 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:55:47.949152 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:55:47.949155 | orchestrator |
2026-03-10 00:55:47.949159 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2026-03-10 00:55:47.949163 | orchestrator | Tuesday 10 March 2026 00:54:25 +0000 (0:00:00.338) 0:01:23.475 *********
2026-03-10 00:55:47.949167 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:55:47.949171 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:55:47.949174 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:55:47.949178 | orchestrator |
2026-03-10 00:55:47.949182 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2026-03-10 00:55:47.949186 | orchestrator | Tuesday 10 March 2026 00:54:25 +0000 (0:00:00.437) 0:01:23.912 *********
2026-03-10 00:55:47.949191 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:55:47.949195 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:55:47.949201 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:55:47.949205 | orchestrator |
2026-03-10 00:55:47.949209 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2026-03-10 00:55:47.949213 | orchestrator | Tuesday 10 March 2026 00:54:25 +0000 (0:00:00.354) 0:01:24.267 *********
2026-03-10 00:55:47.949217 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:55:47.949221 | orchestrator |
2026-03-10 00:55:47.949224 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2026-03-10 00:55:47.949228 | orchestrator | Tuesday 10 March 2026 00:54:27 +0000 (0:00:01.101) 0:01:25.368 *********
2026-03-10 00:55:47.949232 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:55:47.949236 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:55:47.949243 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:55:47.949247 | orchestrator |
2026-03-10 00:55:47.949251 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2026-03-10 00:55:47.949255 | orchestrator | Tuesday 10 March 2026 00:54:27 +0000 (0:00:00.580) 0:01:25.949 *********
2026-03-10 00:55:47.949259 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:55:47.949262 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:55:47.949266 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:55:47.949270 | orchestrator |
2026-03-10 00:55:47.949274 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2026-03-10 00:55:47.949278 | orchestrator | Tuesday 10 March 2026 00:54:28 +0000 (0:00:00.590) 0:01:26.539 *********
2026-03-10 00:55:47.949283 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:55:47.949286 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:55:47.949290 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:55:47.949294 | orchestrator |
2026-03-10 00:55:47.949298 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2026-03-10 00:55:47.949306 | orchestrator | Tuesday 10 March 2026 00:54:28 +0000 (0:00:00.699) 0:01:27.239 *********
2026-03-10 00:55:47.949310 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:55:47.949314 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:55:47.949317 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:55:47.949321 | orchestrator |
2026-03-10 00:55:47.949325 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2026-03-10 00:55:47.949329 | orchestrator | Tuesday 10 March 2026 00:54:29 +0000 (0:00:00.437) 0:01:27.676 *********
2026-03-10 00:55:47.949333 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:55:47.949336 |
orchestrator | skipping: [testbed-node-1] 2026-03-10 00:55:47.949340 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:55:47.949344 | orchestrator | 2026-03-10 00:55:47.949348 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-10 00:55:47.949352 | orchestrator | Tuesday 10 March 2026 00:54:29 +0000 (0:00:00.408) 0:01:28.085 ********* 2026-03-10 00:55:47.949355 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:55:47.949359 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:55:47.949363 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:55:47.949367 | orchestrator | 2026-03-10 00:55:47.949370 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-10 00:55:47.949374 | orchestrator | Tuesday 10 March 2026 00:54:30 +0000 (0:00:00.537) 0:01:28.622 ********* 2026-03-10 00:55:47.949378 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:55:47.949382 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:55:47.949386 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:55:47.949389 | orchestrator | 2026-03-10 00:55:47.949393 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-10 00:55:47.949397 | orchestrator | Tuesday 10 March 2026 00:54:31 +0000 (0:00:00.695) 0:01:29.318 ********* 2026-03-10 00:55:47.949400 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:55:47.949404 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:55:47.949408 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:55:47.949412 | orchestrator | 2026-03-10 00:55:47.949416 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-10 00:55:47.949420 | orchestrator | Tuesday 10 March 2026 00:54:31 +0000 (0:00:00.508) 0:01:29.827 ********* 2026-03-10 00:55:47.949424 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949435 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 
00:55:47.949616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949625 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949629 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949632 | orchestrator | 2026-03-10 00:55:47.949636 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-10 00:55:47.949640 | orchestrator | Tuesday 10 March 2026 00:54:33 +0000 
(0:00:01.849) 0:01:31.676 ********* 2026-03-10 00:55:47.949644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949656 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949694 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949702 | orchestrator | 2026-03-10 00:55:47.949706 | orchestrator | TASK [ovn-db : Check 
ovn containers] ******************************************* 2026-03-10 00:55:47.949710 | orchestrator | Tuesday 10 March 2026 00:54:38 +0000 (0:00:04.622) 0:01:36.299 ********* 2026-03-10 00:55:47.949714 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949726 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 
'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949743 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.949754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-10 00:55:47.949758 | orchestrator | 2026-03-10 00:55:47.949762 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-10 00:55:47.949765 | orchestrator | Tuesday 10 March 2026 00:54:40 +0000 (0:00:02.646) 0:01:38.946 ********* 2026-03-10 00:55:47.949769 | orchestrator | 2026-03-10 00:55:47.949773 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-10 00:55:47.949777 | orchestrator | Tuesday 10 March 2026 00:54:40 +0000 (0:00:00.094) 0:01:39.040 ********* 2026-03-10 00:55:47.949780 | orchestrator | 2026-03-10 00:55:47.949784 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-10 00:55:47.949788 | orchestrator | Tuesday 10 March 2026 00:54:40 +0000 (0:00:00.133) 0:01:39.174 ********* 2026-03-10 00:55:47.949792 | orchestrator | 2026-03-10 00:55:47.949796 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-10 00:55:47.949800 | orchestrator | Tuesday 10 March 2026 00:54:40 +0000 (0:00:00.082) 0:01:39.256 ********* 2026-03-10 00:55:47.949803 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:55:47.949807 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:55:47.949811 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:55:47.949815 | orchestrator | 2026-03-10 00:55:47.949819 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-10 00:55:47.949823 | orchestrator | Tuesday 10 March 2026 00:54:49 +0000 (0:00:08.342) 0:01:47.599 ********* 2026-03-10 00:55:47.949826 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:55:47.949830 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:55:47.949834 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:55:47.949838 | orchestrator | 2026-03-10 00:55:47.949842 | orchestrator | RUNNING HANDLER [ovn-db : Restart 
ovn-northd container] ************************ 2026-03-10 00:55:47.949845 | orchestrator | Tuesday 10 March 2026 00:54:56 +0000 (0:00:07.456) 0:01:55.056 ********* 2026-03-10 00:55:47.949849 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:55:47.949853 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:55:47.949857 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:55:47.949864 | orchestrator | 2026-03-10 00:55:47.949868 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-10 00:55:47.949872 | orchestrator | Tuesday 10 March 2026 00:55:05 +0000 (0:00:08.757) 0:02:03.813 ********* 2026-03-10 00:55:47.949876 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:55:47.949880 | orchestrator | 2026-03-10 00:55:47.949883 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-10 00:55:47.949887 | orchestrator | Tuesday 10 March 2026 00:55:05 +0000 (0:00:00.270) 0:02:04.084 ********* 2026-03-10 00:55:47.949891 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:55:47.949895 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:55:47.949899 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:55:47.949902 | orchestrator | 2026-03-10 00:55:47.949906 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-10 00:55:47.949910 | orchestrator | Tuesday 10 March 2026 00:55:07 +0000 (0:00:01.301) 0:02:05.385 ********* 2026-03-10 00:55:47.949914 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:55:47.949917 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:55:47.949921 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:55:47.949925 | orchestrator | 2026-03-10 00:55:47.949928 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-10 00:55:47.949932 | orchestrator | Tuesday 10 March 2026 00:55:07 +0000 (0:00:00.759) 0:02:06.145 ********* 
2026-03-10 00:55:47.949936 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:55:47.949940 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:55:47.949944 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:55:47.949947 | orchestrator | 2026-03-10 00:55:47.949951 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-10 00:55:47.949955 | orchestrator | Tuesday 10 March 2026 00:55:08 +0000 (0:00:01.060) 0:02:07.205 ********* 2026-03-10 00:55:47.949959 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:55:47.949962 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:55:47.949966 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:55:47.949970 | orchestrator | 2026-03-10 00:55:47.949974 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-10 00:55:47.949978 | orchestrator | Tuesday 10 March 2026 00:55:09 +0000 (0:00:00.882) 0:02:08.087 ********* 2026-03-10 00:55:47.949981 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:55:47.949985 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:55:47.949991 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:55:47.949995 | orchestrator | 2026-03-10 00:55:47.949999 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-10 00:55:47.950003 | orchestrator | Tuesday 10 March 2026 00:55:11 +0000 (0:00:01.323) 0:02:09.411 ********* 2026-03-10 00:55:47.950007 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:55:47.950010 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:55:47.950014 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:55:47.950055 | orchestrator | 2026-03-10 00:55:47.950059 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-03-10 00:55:47.950063 | orchestrator | Tuesday 10 March 2026 00:55:12 +0000 (0:00:01.287) 0:02:10.698 ********* 2026-03-10 00:55:47.950067 | orchestrator | ok: 
[testbed-node-0] 2026-03-10 00:55:47.950071 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:55:47.950077 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:55:47.950081 | orchestrator | 2026-03-10 00:55:47.950085 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-10 00:55:47.950089 | orchestrator | Tuesday 10 March 2026 00:55:12 +0000 (0:00:00.564) 0:02:11.263 ********* 2026-03-10 00:55:47.950093 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.950097 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.950107 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.950111 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.950115 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.950119 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.950123 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.950127 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.950134 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.950139 | orchestrator | 2026-03-10 00:55:47.950143 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-10 00:55:47.950146 | orchestrator | Tuesday 10 March 2026 00:55:14 +0000 (0:00:01.746) 0:02:13.010 ********* 2026-03-10 00:55:47.950153 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.950157 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.950165 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 00:55:47.950169 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.950173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.950177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.950182 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.950185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.950189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.950193 | orchestrator |
2026-03-10 00:55:47.950197 | orchestrator | TASK [ovn-db : Check ovn containers] *******************************************
2026-03-10 00:55:47.950201 | orchestrator | Tuesday 10 March 2026 00:55:19 +0000 (0:00:04.603) 0:02:17.613 *********
2026-03-10 00:55:47.950208 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.950216 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.950224 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.950229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.950233 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.950238 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.950242 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.950247 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.950251 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:55:47.950255 | orchestrator |
2026-03-10 00:55:47.950260 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-10 00:55:47.950264 | orchestrator | Tuesday 10 March 2026 00:55:22 +0000 (0:00:00.124) 0:02:20.681 *********
2026-03-10 00:55:47.950268 | orchestrator |
2026-03-10 00:55:47.950273 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-10 00:55:47.950277 | orchestrator | Tuesday 10 March 2026 00:55:22 +0000 (0:00:00.156) 0:02:20.806 *********
2026-03-10 00:55:47.950281 | orchestrator |
2026-03-10 00:55:47.950287 | orchestrator | TASK [ovn-db : Flush handlers] *************************************************
2026-03-10 00:55:47.950294 | orchestrator | Tuesday 10 March 2026 00:55:22 +0000 (0:00:00.104) 0:02:20.963 *********
2026-03-10 00:55:47.950300 | orchestrator |
2026-03-10 00:55:47.950306 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] *************************
2026-03-10 00:55:47.950312 | orchestrator | Tuesday 10 March 2026 00:55:22 +0000 (0:00:00.104) 0:02:21.067 *********
2026-03-10 00:55:47.950323 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:55:47.950329 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:55:47.950336 | orchestrator |
2026-03-10 00:55:47.950346 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] *************************
2026-03-10 00:55:47.950353 | orchestrator | Tuesday 10 March 2026 00:55:29 +0000 (0:00:06.394) 0:02:27.462 *********
2026-03-10 00:55:47.950360 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:55:47.950368 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:55:47.950372 | orchestrator |
2026-03-10 00:55:47.950375 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************
2026-03-10 00:55:47.950379 | orchestrator | Tuesday 10 March 2026 00:55:35 +0000 (0:00:06.285) 0:02:33.747 *********
2026-03-10 00:55:47.950383 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:55:47.950387 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:55:47.950391 | orchestrator |
2026-03-10 00:55:47.950395 | orchestrator | TASK [ovn-db : Wait for leader election] ***************************************
2026-03-10 00:55:47.950402 | orchestrator | Tuesday 10 March 2026 00:55:42 +0000 (0:00:06.841) 0:02:40.589 *********
2026-03-10 00:55:47.950406 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:55:47.950409 | orchestrator |
2026-03-10 00:55:47.950413 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ******************************
2026-03-10 00:55:47.950417 | orchestrator | Tuesday 10 March 2026 00:55:42 +0000 (0:00:00.193) 0:02:40.783 *********
2026-03-10 00:55:47.950420 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:55:47.950424 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:55:47.950428 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:55:47.950432 | orchestrator |
2026-03-10 00:55:47.950435 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] ***************************
2026-03-10 00:55:47.950439 | orchestrator | Tuesday 10 March 2026 00:55:43 +0000 (0:00:00.832) 0:02:41.615 *********
2026-03-10 00:55:47.950443 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:55:47.950447 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:55:47.950450 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:55:47.950454 | orchestrator |
2026-03-10 00:55:47.950458 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ******************************
2026-03-10 00:55:47.950462 | orchestrator | Tuesday 10 March 2026 00:55:43 +0000 (0:00:00.654) 0:02:42.270 *********
2026-03-10 00:55:47.950465 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:55:47.950469 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:55:47.950473 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:55:47.950477 | orchestrator |
2026-03-10 00:55:47.950480 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] ***************************
2026-03-10 00:55:47.950484 | orchestrator | Tuesday 10 March 2026 00:55:44 +0000 (0:00:00.808) 0:02:43.078 *********
2026-03-10 00:55:47.950488 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:55:47.950491 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:55:47.950495 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:55:47.950499 | orchestrator |
2026-03-10 00:55:47.950502 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] *********************************************
2026-03-10 00:55:47.950506 | orchestrator | Tuesday 10 March 2026 00:55:45 +0000 (0:00:00.663) 0:02:43.741 *********
2026-03-10 00:55:47.950510 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:55:47.950513 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:55:47.950517 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:55:47.950521 | orchestrator |
2026-03-10 00:55:47.950524 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] *********************************************
2026-03-10 00:55:47.950528 | orchestrator | Tuesday 10 March 2026 00:55:46 +0000 (0:00:00.804) 0:02:44.546 *********
2026-03-10 00:55:47.950532 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:55:47.950536 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:55:47.950539 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:55:47.950543 | orchestrator |
2026-03-10 00:55:47.950547 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:55:47.950550 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-10 00:55:47.950559 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-10 00:55:47.950563 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-10 00:55:47.950567 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:55:47.950571 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:55:47.950575 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 00:55:47.950579 | orchestrator |
2026-03-10 00:55:47.950582 | orchestrator |
2026-03-10 00:55:47.950586 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:55:47.950590 | orchestrator | Tuesday 10 March 2026 00:55:47 +0000 (0:00:00.988) 0:02:45.534 *********
2026-03-10 00:55:47.950594 | orchestrator | ===============================================================================
2026-03-10 00:55:47.950597 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 31.66s
2026-03-10 00:55:47.950601 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 22.87s
2026-03-10 00:55:47.950605 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 15.60s
2026-03-10 00:55:47.950609 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.74s
2026-03-10 00:55:47.950613 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.74s
2026-03-10 00:55:47.950616 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.62s
2026-03-10 00:55:47.950620 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.60s
2026-03-10 00:55:47.950627 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.07s
2026-03-10 00:55:47.950631 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.66s
2026-03-10 00:55:47.950634 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.65s
2026-03-10 00:55:47.950638 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.57s
2026-03-10 00:55:47.950642 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.18s
2026-03-10 00:55:47.950645 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 2.09s
2026-03-10 00:55:47.950649 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.99s
2026-03-10 00:55:47.950653 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.85s
2026-03-10 00:55:47.950657 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.83s
2026-03-10 00:55:47.950661 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.83s
2026-03-10 00:55:47.950725 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.75s
2026-03-10 00:55:47.950732 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.38s
2026-03-10 00:55:47.950736 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.32s
2026-03-10 00:55:47.950741 | orchestrator | 2026-03-10 00:55:47 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:55:50.992103 | orchestrator | 2026-03-10 00:55:50 | INFO  | Task
fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:55:50.992720 | orchestrator | 2026-03-10 00:55:50 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED
2026-03-10 00:55:50.992747 | orchestrator | 2026-03-10 00:55:50 | INFO  | Wait 1 second(s) until the next check
2026-03-10 00:59:00.151879 | orchestrator | 2026-03-10 00:59:00 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED
2026-03-10 00:59:00.152303 | orchestrator | 2026-03-10 00:59:00 | INFO
| Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:59:00.152326 | orchestrator | 2026-03-10 00:59:00 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:59:03.210009 | orchestrator | 2026-03-10 00:59:03 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:59:03.210155 | orchestrator | 2026-03-10 00:59:03 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:59:03.210169 | orchestrator | 2026-03-10 00:59:03 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:59:06.239046 | orchestrator | 2026-03-10 00:59:06 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:59:06.239123 | orchestrator | 2026-03-10 00:59:06 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:59:06.239131 | orchestrator | 2026-03-10 00:59:06 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:59:09.297854 | orchestrator | 2026-03-10 00:59:09 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:59:09.307091 | orchestrator | 2026-03-10 00:59:09 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:59:09.307189 | orchestrator | 2026-03-10 00:59:09 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:59:12.355806 | orchestrator | 2026-03-10 00:59:12 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:59:12.356914 | orchestrator | 2026-03-10 00:59:12 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 00:59:12.356988 | orchestrator | 2026-03-10 00:59:12 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:59:15.404279 | orchestrator | 2026-03-10 00:59:15 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:59:15.406141 | orchestrator | 2026-03-10 00:59:15 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state STARTED 2026-03-10 
00:59:15.406184 | orchestrator | 2026-03-10 00:59:15 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:59:18.451038 | orchestrator | 2026-03-10 00:59:18 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:59:18.459615 | orchestrator | 2026-03-10 00:59:18 | INFO  | Task e98eb0d3-682e-4d37-9648-941f5e7f5aad is in state SUCCESS 2026-03-10 00:59:18.461594 | orchestrator | 2026-03-10 00:59:18.461662 | orchestrator | 2026-03-10 00:59:18.461677 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 00:59:18.461689 | orchestrator | 2026-03-10 00:59:18.461701 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 00:59:18.461712 | orchestrator | Tuesday 10 March 2026 00:51:36 +0000 (0:00:00.501) 0:00:00.501 ********* 2026-03-10 00:59:18.461872 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:59:18.461888 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:59:18.461899 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:59:18.461910 | orchestrator | 2026-03-10 00:59:18.461921 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 00:59:18.461936 | orchestrator | Tuesday 10 March 2026 00:51:36 +0000 (0:00:00.420) 0:00:00.921 ********* 2026-03-10 00:59:18.461957 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-10 00:59:18.461976 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-10 00:59:18.461995 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-10 00:59:18.462078 | orchestrator | 2026-03-10 00:59:18.462105 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-03-10 00:59:18.462125 | orchestrator | 2026-03-10 00:59:18.462145 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 
2026-03-10 00:59:18.462177 | orchestrator | Tuesday 10 March 2026 00:51:37 +0000 (0:00:00.713) 0:00:01.635 ********* 2026-03-10 00:59:18.462190 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.462202 | orchestrator | 2026-03-10 00:59:18.462216 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-03-10 00:59:18.462235 | orchestrator | Tuesday 10 March 2026 00:51:38 +0000 (0:00:00.981) 0:00:02.616 ********* 2026-03-10 00:59:18.462263 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:59:18.462282 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:59:18.462299 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:59:18.462316 | orchestrator | 2026-03-10 00:59:18.462334 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-10 00:59:18.462394 | orchestrator | Tuesday 10 March 2026 00:51:40 +0000 (0:00:01.795) 0:00:04.411 ********* 2026-03-10 00:59:18.462414 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.462432 | orchestrator | 2026-03-10 00:59:18.462450 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-10 00:59:18.462469 | orchestrator | Tuesday 10 March 2026 00:51:41 +0000 (0:00:01.189) 0:00:05.602 ********* 2026-03-10 00:59:18.462519 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:59:18.462538 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:59:18.462557 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:59:18.462576 | orchestrator | 2026-03-10 00:59:18.462594 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-03-10 00:59:18.462611 | orchestrator | Tuesday 10 March 2026 00:51:42 +0000 (0:00:00.830) 0:00:06.432 ********* 2026-03-10 00:59:18.462630 | orchestrator | changed: [testbed-node-1] => 
(item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-10 00:59:18.462649 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-10 00:59:18.462668 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-10 00:59:18.462685 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-10 00:59:18.462697 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-10 00:59:18.462707 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-10 00:59:18.462718 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-10 00:59:18.462730 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-10 00:59:18.462760 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-10 00:59:18.462771 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-10 00:59:18.462789 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-10 00:59:18.462807 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-10 00:59:18.462826 | orchestrator | 2026-03-10 00:59:18.462844 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-10 00:59:18.462860 | orchestrator | Tuesday 10 March 2026 00:51:45 +0000 (0:00:03.360) 0:00:09.793 ********* 2026-03-10 00:59:18.462871 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-10 00:59:18.462882 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-10 00:59:18.462894 | orchestrator | 
changed: [testbed-node-1] => (item=ip_vs) 2026-03-10 00:59:18.463151 | orchestrator | 2026-03-10 00:59:18.463179 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-10 00:59:18.463191 | orchestrator | Tuesday 10 March 2026 00:51:47 +0000 (0:00:01.355) 0:00:11.148 ********* 2026-03-10 00:59:18.463202 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-10 00:59:18.463213 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-10 00:59:18.463224 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-10 00:59:18.463234 | orchestrator | 2026-03-10 00:59:18.463245 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-10 00:59:18.463256 | orchestrator | Tuesday 10 March 2026 00:51:49 +0000 (0:00:02.833) 0:00:13.982 ********* 2026-03-10 00:59:18.463267 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-10 00:59:18.463278 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.463305 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-10 00:59:18.463317 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.463328 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-10 00:59:18.463338 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.463349 | orchestrator | 2026-03-10 00:59:18.463360 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-10 00:59:18.463371 | orchestrator | Tuesday 10 March 2026 00:51:52 +0000 (0:00:02.920) 0:00:16.903 ********* 2026-03-10 00:59:18.463396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-10 00:59:18.463433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:59:18.463446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-10 00:59:18.463470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-10 00:59:18.463528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:59:18.463540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:59:18.463564 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:59:18.463583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:59:18.463595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:59:18.463606 | orchestrator | 2026-03-10 00:59:18.463618 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-10 00:59:18.463637 | orchestrator | Tuesday 10 March 2026 00:51:55 +0000 (0:00:02.927) 0:00:19.830 ********* 2026-03-10 00:59:18.463648 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.463659 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.463669 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.463680 | orchestrator | 2026-03-10 00:59:18.463691 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-10 00:59:18.463702 | orchestrator 
| Tuesday 10 March 2026 00:51:57 +0000 (0:00:01.675) 0:00:21.505 ********* 2026-03-10 00:59:18.463712 | orchestrator | changed: [testbed-node-0] => (item=users) 2026-03-10 00:59:18.463723 | orchestrator | changed: [testbed-node-1] => (item=users) 2026-03-10 00:59:18.463734 | orchestrator | changed: [testbed-node-2] => (item=users) 2026-03-10 00:59:18.463745 | orchestrator | changed: [testbed-node-0] => (item=rules) 2026-03-10 00:59:18.463756 | orchestrator | changed: [testbed-node-1] => (item=rules) 2026-03-10 00:59:18.463767 | orchestrator | changed: [testbed-node-2] => (item=rules) 2026-03-10 00:59:18.463878 | orchestrator | 2026-03-10 00:59:18.463891 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2026-03-10 00:59:18.463903 | orchestrator | Tuesday 10 March 2026 00:52:00 +0000 (0:00:03.539) 0:00:25.045 ********* 2026-03-10 00:59:18.463922 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.463941 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.463959 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.463979 | orchestrator | 2026-03-10 00:59:18.463999 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2026-03-10 00:59:18.464019 | orchestrator | Tuesday 10 March 2026 00:52:04 +0000 (0:00:03.096) 0:00:28.141 ********* 2026-03-10 00:59:18.464031 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:59:18.464042 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:59:18.464052 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:59:18.464063 | orchestrator | 2026-03-10 00:59:18.464074 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2026-03-10 00:59:18.464084 | orchestrator | Tuesday 10 March 2026 00:52:07 +0000 (0:00:03.196) 0:00:31.338 ********* 2026-03-10 00:59:18.464096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-10 00:59:18.464118 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:59:18.464130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:59:18.464158 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__07bdd2a4d02068b9fb12abbc2a9032399eab53fc', '__omit_place_holder__07bdd2a4d02068b9fb12abbc2a9032399eab53fc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-10 00:59:18.464170 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.464182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-10 00:59:18.464193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:59:18.464205 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:59:18.464216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__07bdd2a4d02068b9fb12abbc2a9032399eab53fc', '__omit_place_holder__07bdd2a4d02068b9fb12abbc2a9032399eab53fc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-10 00:59:18.464228 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.464249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-10 00:59:18.464273 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:59:18.464284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:59:18.464296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__07bdd2a4d02068b9fb12abbc2a9032399eab53fc', '__omit_place_holder__07bdd2a4d02068b9fb12abbc2a9032399eab53fc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-10 00:59:18.464307 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.464318 | orchestrator | 2026-03-10 00:59:18.464329 | orchestrator | 
TASK [loadbalancer : Copying checks for services which are enabled] ************ 2026-03-10 00:59:18.464340 | orchestrator | Tuesday 10 March 2026 00:52:09 +0000 (0:00:02.256) 0:00:33.595 ********* 2026-03-10 00:59:18.464351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-10 00:59:18.464363 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-10 00:59:18.464393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-10 00:59:18.464418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:59:18.464430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:59:18.464441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__07bdd2a4d02068b9fb12abbc2a9032399eab53fc', 
'__omit_place_holder__07bdd2a4d02068b9fb12abbc2a9032399eab53fc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-10 00:59:18.464453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:59:18.464464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:59:18.464584 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__07bdd2a4d02068b9fb12abbc2a9032399eab53fc', 
'__omit_place_holder__07bdd2a4d02068b9fb12abbc2a9032399eab53fc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-10 00:59:18.464620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:59:18.464672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:59:18.464695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__07bdd2a4d02068b9fb12abbc2a9032399eab53fc', 
'__omit_place_holder__07bdd2a4d02068b9fb12abbc2a9032399eab53fc'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2026-03-10 00:59:18.464715 | orchestrator | 2026-03-10 00:59:18.464734 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2026-03-10 00:59:18.464753 | orchestrator | Tuesday 10 March 2026 00:52:14 +0000 (0:00:05.078) 0:00:38.673 ********* 2026-03-10 00:59:18.464772 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-10 00:59:18.464792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-10 00:59:18.464807 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-10 00:59:18.464827 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:59:18.464853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:59:18.464865 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 
'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:59:18.464877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:59:18.464888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:59:18.464900 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:59:18.464911 | orchestrator | 2026-03-10 00:59:18.464923 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2026-03-10 00:59:18.464933 | orchestrator | Tuesday 10 March 2026 00:52:18 +0000 (0:00:03.603) 0:00:42.276 ********* 2026-03-10 00:59:18.464944 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-10 00:59:18.464956 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-10 00:59:18.464974 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2026-03-10 00:59:18.464985 | orchestrator | 2026-03-10 00:59:18.464996 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2026-03-10 00:59:18.465006 | orchestrator | Tuesday 10 March 2026 00:52:22 +0000 (0:00:04.291) 0:00:46.568 ********* 2026-03-10 00:59:18.465017 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-10 00:59:18.465028 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-10 00:59:18.465039 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2026-03-10 00:59:18.465049 | orchestrator | 2026-03-10 00:59:18.465075 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2026-03-10 00:59:18.465095 | orchestrator | Tuesday 10 March 2026 00:52:29 +0000 (0:00:06.490) 0:00:53.059 ********* 2026-03-10 00:59:18.465110 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.465125 
| orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.465139 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.465154 | orchestrator | 2026-03-10 00:59:18.465170 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2026-03-10 00:59:18.465186 | orchestrator | Tuesday 10 March 2026 00:52:30 +0000 (0:00:01.633) 0:00:54.692 ********* 2026-03-10 00:59:18.465203 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-10 00:59:18.465220 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-10 00:59:18.465236 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2026-03-10 00:59:18.465252 | orchestrator | 2026-03-10 00:59:18.465269 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2026-03-10 00:59:18.465295 | orchestrator | Tuesday 10 March 2026 00:52:35 +0000 (0:00:04.900) 0:00:59.593 ********* 2026-03-10 00:59:18.465311 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-10 00:59:18.465326 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-10 00:59:18.465337 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2026-03-10 00:59:18.465346 | orchestrator | 2026-03-10 00:59:18.465356 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2026-03-10 00:59:18.465365 | orchestrator | Tuesday 10 March 2026 00:52:39 +0000 (0:00:03.693) 0:01:03.287 ********* 2026-03-10 00:59:18.465375 | orchestrator | changed: [testbed-node-0] => 
(item=haproxy.pem) 2026-03-10 00:59:18.465384 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2026-03-10 00:59:18.465394 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2026-03-10 00:59:18.465403 | orchestrator | 2026-03-10 00:59:18.465413 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2026-03-10 00:59:18.465422 | orchestrator | Tuesday 10 March 2026 00:52:41 +0000 (0:00:02.096) 0:01:05.383 ********* 2026-03-10 00:59:18.465432 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2026-03-10 00:59:18.465441 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2026-03-10 00:59:18.465450 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2026-03-10 00:59:18.465459 | orchestrator | 2026-03-10 00:59:18.465469 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-10 00:59:18.465502 | orchestrator | Tuesday 10 March 2026 00:52:44 +0000 (0:00:02.722) 0:01:08.105 ********* 2026-03-10 00:59:18.465512 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.465532 | orchestrator | 2026-03-10 00:59:18.465542 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2026-03-10 00:59:18.465551 | orchestrator | Tuesday 10 March 2026 00:52:45 +0000 (0:00:00.978) 0:01:09.084 ********* 2026-03-10 00:59:18.465562 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-10 00:59:18.465573 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-10 00:59:18.465591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-10 00:59:18.465607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:59:18.465618 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:59:18.465628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:59:18.465645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:59:18.465655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:59:18.465665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:59:18.465677 | orchestrator | 2026-03-10 00:59:18.465693 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2026-03-10 00:59:18.465709 | orchestrator | Tuesday 10 March 2026 00:52:48 +0000 (0:00:03.728) 0:01:12.812 ********* 2026-03-10 00:59:18.465733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-10 00:59:18.465755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:59:18.465771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:59:18.465786 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.465801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-10 00:59:18.465828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:59:18.465844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:59:18.465860 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.465876 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-10 00:59:18.465901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:59:18.465919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:59:18.465935 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.465952 | orchestrator | 2026-03-10 00:59:18.465969 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2026-03-10 00:59:18.465985 | orchestrator | Tuesday 10 March 2026 00:52:50 +0000 (0:00:01.835) 0:01:14.648 ********* 2026-03-10 00:59:18.466001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-10 00:59:18.466058 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:59:18.466104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:59:18.466116 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.466126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-10 00:59:18.466145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-10 00:59:18.466156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-10 00:59:18.466166 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.466182 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.466199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.466209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.466224 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.466241 | orchestrator |
2026-03-10 00:59:18.466256 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-10 00:59:18.466270 | orchestrator | Tuesday 10 March 2026 00:52:51 +0000 (0:00:01.120) 0:01:15.769 *********
2026-03-10 00:59:18.466285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.466311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.466328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.466343 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.466372 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.466401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.466418 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.466436 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.466447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.466457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.466495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.466507 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.466516 | orchestrator |
2026-03-10 00:59:18.466526 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-10 00:59:18.466536 | orchestrator | Tuesday 10 March 2026 00:52:52 +0000 (0:00:01.132) 0:01:16.901 *********
2026-03-10 00:59:18.466546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.466569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.466579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.466595 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.466612 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.466628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.466646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.466663 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.466690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.466727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.466741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.466752 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.466762 | orchestrator |
2026-03-10 00:59:18.466772 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-03-10 00:59:18.466781 | orchestrator | Tuesday 10 March 2026 00:52:53 +0000 (0:00:00.897) 0:01:17.798 *********
2026-03-10 00:59:18.466792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.466802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.466812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.466823 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.467677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.467739 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.467768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.467786 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.467803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.467820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.467838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.467854 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.467868 | orchestrator |
2026-03-10 00:59:18.467881 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2026-03-10 00:59:18.467896 | orchestrator | Tuesday 10 March 2026 00:52:54 +0000 (0:00:01.012) 0:01:18.810 *********
2026-03-10 00:59:18.467910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.467946 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.467968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.467982 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.467996 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.468010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.468023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.468037 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.468051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.468072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.468298 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.468319 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.468333 | orchestrator |
2026-03-10 00:59:18.468348 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-03-10 00:59:18.468364 | orchestrator | Tuesday 10 March 2026 00:52:56 +0000 (0:00:01.410) 0:01:20.221 *********
2026-03-10 00:59:18.468386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.468403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.468418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.468433 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.468447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.468471 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.468522 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.468537 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.468558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.468574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.468588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.468601 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.468614 | orchestrator |
2026-03-10 00:59:18.468628 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2026-03-10 00:59:18.468702 | orchestrator | Tuesday 10 March 2026 00:52:56 +0000 (0:00:00.568) 0:01:20.789 *********
2026-03-10 00:59:18.468720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.468746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.468762 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.468826 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.468855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.468884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.468900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.468915 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.468930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-10 00:59:18.468945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-10 00:59:18.468970 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-10 00:59:18.468985 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.469003 | orchestrator |
2026-03-10 00:59:18.469016 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-03-10 00:59:18.469028 | orchestrator | Tuesday 10 March 2026 00:52:57 +0000 (0:00:00.995) 0:01:21.784 *********
2026-03-10 00:59:18.469042 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-10 00:59:18.469057 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-10 00:59:18.469079 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-10 00:59:18.469093 | orchestrator |
2026-03-10 00:59:18.469106 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-03-10 00:59:18.469119 | orchestrator | Tuesday 10 March 2026 00:52:59 +0000 (0:00:01.676) 0:01:23.461 *********
2026-03-10 00:59:18.469133 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-10 00:59:18.469146 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-10 00:59:18.469159 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-10 00:59:18.469172 | orchestrator |
2026-03-10 00:59:18.469185 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-03-10 00:59:18.469199 | orchestrator | Tuesday 10 March 2026 00:53:01 +0000 (0:00:01.105) 0:01:25.087 *********
2026-03-10 00:59:18.469214 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-10 00:59:18.469230 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-10 00:59:18.469250 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-10 00:59:18.469265 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-10 00:59:18.469279 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.469293 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-10 00:59:18.469306 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.469320 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-10 00:59:18.469333 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.469347 | orchestrator |
2026-03-10 00:59:18.469361 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-03-10 00:59:18.469374 | orchestrator | Tuesday 10 March 2026 00:53:02 +0000 (0:00:01.105) 0:01:26.193 *********
2026-03-10 00:59:18.469389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True,
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-10 00:59:18.469417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-10 00:59:18.469432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-10 00:59:18.469458 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 
'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:59:18.469497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:59:18.469519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-10 00:59:18.469533 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:59:18.469555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:59:18.469700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-10 00:59:18.469718 | orchestrator | 2026-03-10 00:59:18.469733 | orchestrator | TASK [include_role : aodh] ***************************************************** 2026-03-10 00:59:18.469747 | orchestrator | Tuesday 10 March 2026 00:53:05 +0000 (0:00:03.399) 0:01:29.592 ********* 2026-03-10 00:59:18.469760 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.469774 | orchestrator | 2026-03-10 00:59:18.469789 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2026-03-10 00:59:18.469802 | orchestrator | Tuesday 10 
March 2026 00:53:06 +0000 (0:00:00.775) 0:01:30.367 ********* 2026-03-10 00:59:18.469818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-10 00:59:18.469837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-10 00:59:18.469852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.469861 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-10 00:59:18.469876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.469885 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-10 00:59:18.469893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.469908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.469922 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2026-03-10 00:59:18.469935 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-10 00:59:18.469944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.469952 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.469960 | orchestrator | 2026-03-10 00:59:18.469968 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2026-03-10 00:59:18.469977 | orchestrator | Tuesday 10 March 2026 00:53:11 +0000 (0:00:05.419) 0:01:35.786 ********* 2026-03-10 00:59:18.469990 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-10 00:59:18.470011 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': 
'30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-10 00:59:18.470104 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-10 00:59:18.470129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-10 00:59:18.470143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.470156 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.470168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.470181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.470194 | orchestrator | skipping: 
[testbed-node-1] 2026-03-10 00:59:18.470208 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.470231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-10 00:59:18.470262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-10 00:59:18.470276 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.470290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.470304 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.470317 | orchestrator | 2026-03-10 00:59:18.470330 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-10 00:59:18.470447 | orchestrator | Tuesday 10 March 2026 00:53:13 +0000 (0:00:01.569) 0:01:37.356 ********* 2026-03-10 00:59:18.470467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-10 00:59:18.470611 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-10 00:59:18.470623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-10 00:59:18.470637 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.470651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-10 00:59:18.470664 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.470677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-10 00:59:18.470690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-10 00:59:18.470704 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.470718 | orchestrator | 2026-03-10 00:59:18.470740 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-10 00:59:18.470766 | orchestrator | Tuesday 10 March 2026 00:53:14 +0000 (0:00:01.147) 0:01:38.504 ********* 2026-03-10 00:59:18.470779 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.470792 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.470805 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.470818 | orchestrator | 2026-03-10 00:59:18.470832 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-10 00:59:18.470845 | orchestrator | Tuesday 10 March 2026 00:53:15 +0000 (0:00:01.503) 0:01:40.008 ********* 2026-03-10 00:59:18.470860 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.470873 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.470885 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.470898 | orchestrator | 2026-03-10 00:59:18.470911 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-10 00:59:18.470924 | orchestrator | Tuesday 10 March 2026 
00:53:18 +0000 (0:00:02.237) 0:01:42.245 ********* 2026-03-10 00:59:18.470937 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.470950 | orchestrator | 2026-03-10 00:59:18.470962 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-10 00:59:18.470989 | orchestrator | Tuesday 10 March 2026 00:53:19 +0000 (0:00:01.525) 0:01:43.770 ********* 2026-03-10 00:59:18.471005 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-10 00:59:18.471021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.471035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.471050 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-10 00:59:18.471077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.471094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.471121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-10 00:59:18.471133 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.471145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.471156 | orchestrator | 2026-03-10 00:59:18.471177 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-10 00:59:18.471187 | orchestrator | Tuesday 10 March 2026 00:53:24 +0000 (0:00:05.178) 0:01:48.949 ********* 2026-03-10 00:59:18.471207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-10 00:59:18.471226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.471238 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.471250 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.471262 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-10 00:59:18.471275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-10 00:59:18.471300 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 
'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.471312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.471329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.471342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 
'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.471354 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.471366 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.471377 | orchestrator | 2026-03-10 00:59:18.471388 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-10 00:59:18.471399 | orchestrator | Tuesday 10 March 2026 00:53:25 +0000 (0:00:00.716) 0:01:49.666 ********* 2026-03-10 00:59:18.471411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-10 00:59:18.471423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-10 00:59:18.471437 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.471448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-10 00:59:18.471467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-10 
00:59:18.471504 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.471515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-10 00:59:18.471528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-10 00:59:18.471540 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.471587 | orchestrator | 2026-03-10 00:59:18.471600 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-10 00:59:18.471612 | orchestrator | Tuesday 10 March 2026 00:53:26 +0000 (0:00:01.145) 0:01:50.812 ********* 2026-03-10 00:59:18.471624 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.471636 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.471648 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.471659 | orchestrator | 2026-03-10 00:59:18.471670 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-10 00:59:18.471681 | orchestrator | Tuesday 10 March 2026 00:53:28 +0000 (0:00:01.470) 0:01:52.282 ********* 2026-03-10 00:59:18.471692 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.471704 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.471716 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.471727 | orchestrator | 2026-03-10 00:59:18.471744 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-10 00:59:18.471766 | orchestrator | Tuesday 10 March 2026 00:53:30 +0000 (0:00:02.354) 0:01:54.637 ********* 2026-03-10 00:59:18.471777 | orchestrator | skipping: [testbed-node-0] 
2026-03-10 00:59:18.471789 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.471801 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.471812 | orchestrator | 2026-03-10 00:59:18.471963 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-10 00:59:18.471979 | orchestrator | Tuesday 10 March 2026 00:53:30 +0000 (0:00:00.350) 0:01:54.987 ********* 2026-03-10 00:59:18.471992 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.472004 | orchestrator | 2026-03-10 00:59:18.472016 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-10 00:59:18.472028 | orchestrator | Tuesday 10 March 2026 00:53:31 +0000 (0:00:01.058) 0:01:56.046 ********* 2026-03-10 00:59:18.472075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-10 00:59:18.472092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-10 00:59:18.472116 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-10 00:59:18.472128 | orchestrator | 2026-03-10 00:59:18.472140 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-10 00:59:18.472152 | orchestrator | Tuesday 10 March 2026 00:53:36 +0000 (0:00:04.607) 0:02:00.653 ********* 2026-03-10 00:59:18.472173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-10 00:59:18.472185 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.472203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-10 00:59:18.472215 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.472227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check 
inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-10 00:59:18.472247 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.472259 | orchestrator | 2026-03-10 00:59:18.472269 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-10 00:59:18.472280 | orchestrator | Tuesday 10 March 2026 00:53:38 +0000 (0:00:02.218) 0:02:02.871 ********* 2026-03-10 00:59:18.472293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-10 00:59:18.472308 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-10 00:59:18.472323 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.472335 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-10 00:59:18.472348 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-10 00:59:18.472360 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.472379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-10 00:59:18.472393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-10 00:59:18.472405 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.472417 | orchestrator | 2026-03-10 00:59:18.472430 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2026-03-10 00:59:18.472470 | orchestrator 
| Tuesday 10 March 2026 00:53:41 +0000 (0:00:03.136) 0:02:06.008 ********* 2026-03-10 00:59:18.472502 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.472514 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.472526 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.472537 | orchestrator | 2026-03-10 00:59:18.472556 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-10 00:59:18.472576 | orchestrator | Tuesday 10 March 2026 00:53:43 +0000 (0:00:01.286) 0:02:07.294 ********* 2026-03-10 00:59:18.472584 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.472590 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.472597 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.472603 | orchestrator | 2026-03-10 00:59:18.472609 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-10 00:59:18.472616 | orchestrator | Tuesday 10 March 2026 00:53:45 +0000 (0:00:01.768) 0:02:09.063 ********* 2026-03-10 00:59:18.472623 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.472629 | orchestrator | 2026-03-10 00:59:18.472636 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-10 00:59:18.472642 | orchestrator | Tuesday 10 March 2026 00:53:46 +0000 (0:00:01.245) 0:02:10.309 ********* 2026-03-10 00:59:18.472650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 00:59:18.472658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 00:59:18.472666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.472680 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.472693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.472711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.472718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.472726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.472736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 00:59:18.472744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.472759 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.472767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.472774 | orchestrator | 2026-03-10 00:59:18.472781 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-10 00:59:18.472787 | orchestrator | Tuesday 10 March 2026 00:53:53 +0000 (0:00:07.377) 0:02:17.687 ********* 2026-03-10 00:59:18.472794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-10 00:59:18.472801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.472813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.472830 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.472837 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.472844 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-10 00:59:18.472851 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.472858 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.472865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.472873 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.472884 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}})  2026-03-10 00:59:18.472899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.472906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.472913 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.472920 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.472927 | orchestrator | 2026-03-10 00:59:18.472934 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-10 00:59:18.472941 | orchestrator | Tuesday 10 March 2026 00:53:55 +0000 (0:00:01.563) 0:02:19.251 ********* 2026-03-10 00:59:18.472948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-10 00:59:18.472955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-10 00:59:18.472962 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.473068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-10 00:59:18.473077 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-10 00:59:18.473089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-10 00:59:18.473096 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.473108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-10 00:59:18.473115 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.473122 | orchestrator | 2026-03-10 00:59:18.473129 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-10 00:59:18.473135 | orchestrator | Tuesday 10 March 2026 00:53:56 +0000 (0:00:01.529) 0:02:20.780 ********* 2026-03-10 00:59:18.473142 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.473149 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.473155 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.473162 | orchestrator | 2026-03-10 00:59:18.473168 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-10 00:59:18.473175 | orchestrator | Tuesday 10 March 2026 00:53:58 +0000 (0:00:01.697) 0:02:22.478 ********* 2026-03-10 00:59:18.473182 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.473188 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.473195 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.473201 | orchestrator | 2026-03-10 00:59:18.473208 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-10 00:59:18.473220 | orchestrator | Tuesday 10 March 2026 00:54:00 +0000 (0:00:02.276) 0:02:24.754 ********* 2026-03-10 00:59:18.473227 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.473234 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.473240 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.473247 | orchestrator | 2026-03-10 00:59:18.473253 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-10 00:59:18.473260 | orchestrator | Tuesday 10 March 2026 00:54:01 +0000 (0:00:00.485) 0:02:25.239 ********* 2026-03-10 
00:59:18.473267 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.473273 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.473280 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.473286 | orchestrator | 2026-03-10 00:59:18.473293 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-10 00:59:18.473300 | orchestrator | Tuesday 10 March 2026 00:54:01 +0000 (0:00:00.363) 0:02:25.602 ********* 2026-03-10 00:59:18.473307 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.473313 | orchestrator | 2026-03-10 00:59:18.473320 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-10 00:59:18.473326 | orchestrator | Tuesday 10 March 2026 00:54:02 +0000 (0:00:00.950) 0:02:26.553 ********* 2026-03-10 00:59:18.473333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 00:59:18.473347 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 00:59:18.473354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473367 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473379 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 00:59:18.473412 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 00:59:18.473425 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 00:59:18.473436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 00:59:18.473443 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473526 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473562 | orchestrator | 2026-03-10 00:59:18.473569 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-10 00:59:18.473576 | orchestrator | Tuesday 10 March 2026 00:54:08 +0000 (0:00:06.015) 0:02:32.569 ********* 2026-03-10 00:59:18.473583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 00:59:18.473595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 00:59:18.473606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473640 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473647 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.473658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 00:59:18.473666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 00:59:18.473676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473703 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473717 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.473729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 00:59:18.473741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 
'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 00:59:18.473748 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.473794 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.473800 | orchestrator | 2026-03-10 00:59:18.473807 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 2026-03-10 00:59:18.473814 | orchestrator | Tuesday 10 March 2026 00:54:10 +0000 (0:00:01.808) 
0:02:34.377 ********* 2026-03-10 00:59:18.473822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-10 00:59:18.473829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-10 00:59:18.473837 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.473847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-10 00:59:18.473859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-10 00:59:18.473866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-10 00:59:18.473872 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.473879 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-10 00:59:18.473886 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.473892 | orchestrator | 2026-03-10 00:59:18.473899 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-10 00:59:18.473906 | orchestrator | Tuesday 10 March 2026 00:54:12 +0000 (0:00:02.179) 0:02:36.557 ********* 2026-03-10 00:59:18.473913 | orchestrator | changed: 
[testbed-node-0] 2026-03-10 00:59:18.473919 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.473927 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.473938 | orchestrator | 2026-03-10 00:59:18.473952 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-10 00:59:18.473967 | orchestrator | Tuesday 10 March 2026 00:54:15 +0000 (0:00:02.831) 0:02:39.388 ********* 2026-03-10 00:59:18.473977 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.473987 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.473996 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.474006 | orchestrator | 2026-03-10 00:59:18.474162 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-10 00:59:18.474178 | orchestrator | Tuesday 10 March 2026 00:54:17 +0000 (0:00:02.527) 0:02:41.916 ********* 2026-03-10 00:59:18.474186 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.474192 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.474199 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.474205 | orchestrator | 2026-03-10 00:59:18.474212 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-10 00:59:18.474222 | orchestrator | Tuesday 10 March 2026 00:54:18 +0000 (0:00:00.758) 0:02:42.675 ********* 2026-03-10 00:59:18.474234 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.474244 | orchestrator | 2026-03-10 00:59:18.474254 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-10 00:59:18.474264 | orchestrator | Tuesday 10 March 2026 00:54:19 +0000 (0:00:00.903) 0:02:43.578 ********* 2026-03-10 00:59:18.474289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 
'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 00:59:18.474320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-10 00:59:18.474333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 00:59:18.474361 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-10 00:59:18.474383 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 00:59:18.474420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-10 00:59:18.474441 | orchestrator | 2026-03-10 00:59:18.474453 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-10 00:59:18.474464 | orchestrator | Tuesday 10 March 2026 00:54:25 +0000 (0:00:05.850) 0:02:49.428 ********* 2026-03-10 00:59:18.474492 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': 
['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-10 00:59:18.474514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-10 00:59:18.474527 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.474535 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-10 00:59:18.474548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-10 00:59:18.474563 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.474574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': 
['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-10 00:59:18.474587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-10 00:59:18.474599 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.474606 | orchestrator | 2026-03-10 00:59:18.474613 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-10 00:59:18.474620 | orchestrator | Tuesday 10 March 2026 00:54:29 +0000 (0:00:04.348) 0:02:53.777 ********* 2026-03-10 00:59:18.474627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-10 00:59:18.474637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-10 00:59:18.474645 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.474652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-10 00:59:18.474659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-10 00:59:18.474666 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.474673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-10 00:59:18.474680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}})  2026-03-10 00:59:18.474692 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.474699 | orchestrator | 2026-03-10 00:59:18.474706 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-10 00:59:18.474713 | orchestrator | Tuesday 10 March 2026 00:54:34 +0000 (0:00:04.474) 0:02:58.252 ********* 2026-03-10 00:59:18.474720 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.474726 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.474733 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.474740 | orchestrator | 2026-03-10 00:59:18.474747 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-10 00:59:18.474753 | orchestrator | Tuesday 10 March 2026 00:54:35 +0000 (0:00:01.588) 0:02:59.840 ********* 2026-03-10 00:59:18.474760 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.474768 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.474779 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.474790 | orchestrator | 2026-03-10 00:59:18.474801 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-10 00:59:18.474817 | orchestrator | Tuesday 10 March 2026 00:54:38 +0000 (0:00:02.590) 0:03:02.430 ********* 2026-03-10 00:59:18.474828 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.474839 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.474851 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.474862 | orchestrator | 2026-03-10 00:59:18.474873 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-10 00:59:18.474884 | orchestrator | Tuesday 10 March 2026 00:54:39 +0000 (0:00:00.709) 0:03:03.140 ********* 2026-03-10 00:59:18.474891 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.474898 | 
orchestrator | 2026-03-10 00:59:18.474905 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-10 00:59:18.474911 | orchestrator | Tuesday 10 March 2026 00:54:40 +0000 (0:00:00.969) 0:03:04.109 ********* 2026-03-10 00:59:18.474923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 00:59:18.474931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 00:59:18.474995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 00:59:18.475011 | orchestrator | 2026-03-10 00:59:18.475018 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-10 00:59:18.475024 | orchestrator | Tuesday 10 March 2026 00:54:44 +0000 (0:00:04.268) 0:03:08.378 ********* 2026-03-10 00:59:18.475053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-10 00:59:18.475066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-10 00:59:18.475079 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.475090 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.475100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-10 00:59:18.475110 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.475180 | orchestrator | 2026-03-10 00:59:18.475191 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-10 00:59:18.475198 | orchestrator | Tuesday 10 March 2026 00:54:45 +0000 (0:00:00.756) 0:03:09.134 ********* 2026-03-10 00:59:18.475205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-10 00:59:18.475213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-10 00:59:18.475221 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.475248 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000'}})  2026-03-10 00:59:18.475256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-10 00:59:18.475262 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.475269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-10 00:59:18.475282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-10 00:59:18.475289 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.475296 | orchestrator | 2026-03-10 00:59:18.475302 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-10 00:59:18.475309 | orchestrator | Tuesday 10 March 2026 00:54:45 +0000 (0:00:00.781) 0:03:09.916 ********* 2026-03-10 00:59:18.475315 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.475322 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.475328 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.475335 | orchestrator | 2026-03-10 00:59:18.475342 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-10 00:59:18.475348 | orchestrator | Tuesday 10 March 2026 00:54:47 +0000 (0:00:01.405) 0:03:11.321 ********* 2026-03-10 00:59:18.475355 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.475361 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.475368 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.475374 | orchestrator | 2026-03-10 00:59:18.475381 | orchestrator | TASK 
[include_role : heat] ***************************************************** 2026-03-10 00:59:18.475387 | orchestrator | Tuesday 10 March 2026 00:54:49 +0000 (0:00:02.724) 0:03:14.046 ********* 2026-03-10 00:59:18.475394 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.475400 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.475407 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.475414 | orchestrator | 2026-03-10 00:59:18.475420 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-10 00:59:18.475427 | orchestrator | Tuesday 10 March 2026 00:54:50 +0000 (0:00:00.651) 0:03:14.697 ********* 2026-03-10 00:59:18.475433 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.475440 | orchestrator | 2026-03-10 00:59:18.475446 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-10 00:59:18.475453 | orchestrator | Tuesday 10 March 2026 00:54:51 +0000 (0:00:01.005) 0:03:15.702 ********* 2026-03-10 00:59:18.475519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-10 00:59:18.475537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', 
'', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-10 00:59:18.475558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 
'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-10 00:59:18.475571 | orchestrator | 2026-03-10 00:59:18.475578 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2026-03-10 00:59:18.475599 | orchestrator | Tuesday 10 March 2026 00:54:55 +0000 (0:00:03.905) 0:03:19.608 ********* 2026-03-10 00:59:18.475612 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-10 00:59:18.475620 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.475634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-10 00:59:18.475646 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.475659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-10 00:59:18.475667 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.475673 | orchestrator | 2026-03-10 00:59:18.475680 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-10 00:59:18.475716 | orchestrator | Tuesday 10 March 2026 00:54:56 +0000 (0:00:01.337) 0:03:20.946 ********* 2026-03-10 00:59:18.475724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-10 00:59:18.475749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-10 00:59:18.475759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 
'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-10 00:59:18.475767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-10 00:59:18.475774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-10 00:59:18.475814 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.475825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-10 00:59:18.475838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-10 00:59:18.475848 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-10 00:59:18.475860 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-10 00:59:18.475871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-10 00:59:18.475882 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.475894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-10 00:59:18.475913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-10 00:59:18.475926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-10 00:59:18.475950 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}})  2026-03-10 00:59:18.475968 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-10 00:59:18.475975 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.475982 | orchestrator | 2026-03-10 00:59:18.475989 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-10 00:59:18.475996 | orchestrator | Tuesday 10 March 2026 00:54:58 +0000 (0:00:01.488) 0:03:22.434 ********* 2026-03-10 00:59:18.476003 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.476009 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.476016 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.476022 | orchestrator | 2026-03-10 00:59:18.476029 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-10 00:59:18.476036 | orchestrator | Tuesday 10 March 2026 00:54:59 +0000 (0:00:01.375) 0:03:23.810 ********* 2026-03-10 00:59:18.476042 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.476049 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.476055 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.476062 | orchestrator | 2026-03-10 00:59:18.476069 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-10 00:59:18.476122 | orchestrator | Tuesday 10 March 2026 00:55:02 +0000 (0:00:02.626) 0:03:26.436 ********* 2026-03-10 00:59:18.476130 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.476136 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.476142 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.476148 | orchestrator | 2026-03-10 00:59:18.476154 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-10 00:59:18.476160 | orchestrator | 
Tuesday 10 March 2026 00:55:02 +0000 (0:00:00.381) 0:03:26.818 ********* 2026-03-10 00:59:18.476166 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.476173 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.476179 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.476185 | orchestrator | 2026-03-10 00:59:18.476191 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-10 00:59:18.476197 | orchestrator | Tuesday 10 March 2026 00:55:03 +0000 (0:00:00.700) 0:03:27.518 ********* 2026-03-10 00:59:18.476203 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.476218 | orchestrator | 2026-03-10 00:59:18.476225 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-10 00:59:18.476231 | orchestrator | Tuesday 10 March 2026 00:55:04 +0000 (0:00:01.151) 0:03:28.669 ********* 2026-03-10 00:59:18.476238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}}}}) 2026-03-10 00:59:18.476257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 00:59:18.476264 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 00:59:18.476275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 00:59:18.476282 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 00:59:18.476289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 00:59:18.476295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 00:59:18.476318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 00:59:18.476339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 
 2026-03-10 00:59:18.476346 | orchestrator | 2026-03-10 00:59:18.476352 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-10 00:59:18.476382 | orchestrator | Tuesday 10 March 2026 00:55:10 +0000 (0:00:05.943) 0:03:34.612 ********* 2026-03-10 00:59:18.476389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 00:59:18.476408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  
2026-03-10 00:59:18.476415 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 00:59:18.476426 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.476437 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 00:59:18.476447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 00:59:18.476454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 00:59:18.476461 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.476488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 00:59:18.476500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 00:59:18.476518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 00:59:18.476529 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.476540 | orchestrator | 2026-03-10 00:59:18.476550 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2026-03-10 00:59:18.476566 | orchestrator | Tuesday 10 March 2026 00:55:12 +0000 (0:00:01.506) 0:03:36.119 ********* 2026-03-10 00:59:18.476574 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': 
True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-10 00:59:18.476582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-10 00:59:18.476588 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.476594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-10 00:59:18.476605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-10 00:59:18.476611 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.476618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-10 00:59:18.476624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2026-03-10 00:59:18.476630 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.476636 | orchestrator | 2026-03-10 00:59:18.476642 | orchestrator | TASK [proxysql-config : 
Copying over keystone ProxySQL users config] *********** 2026-03-10 00:59:18.476649 | orchestrator | Tuesday 10 March 2026 00:55:13 +0000 (0:00:01.614) 0:03:37.733 ********* 2026-03-10 00:59:18.476655 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.476661 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.476667 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.476673 | orchestrator | 2026-03-10 00:59:18.476679 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2026-03-10 00:59:18.476685 | orchestrator | Tuesday 10 March 2026 00:55:15 +0000 (0:00:01.604) 0:03:39.338 ********* 2026-03-10 00:59:18.476696 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.476702 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.476708 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.476714 | orchestrator | 2026-03-10 00:59:18.476720 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2026-03-10 00:59:18.476727 | orchestrator | Tuesday 10 March 2026 00:55:18 +0000 (0:00:03.116) 0:03:42.456 ********* 2026-03-10 00:59:18.476733 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.476739 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.476745 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.476751 | orchestrator | 2026-03-10 00:59:18.476757 | orchestrator | TASK [include_role : magnum] *************************************************** 2026-03-10 00:59:18.476763 | orchestrator | Tuesday 10 March 2026 00:55:19 +0000 (0:00:00.665) 0:03:43.121 ********* 2026-03-10 00:59:18.476769 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.476775 | orchestrator | 2026-03-10 00:59:18.476781 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2026-03-10 00:59:18.476788 | orchestrator | Tuesday 
10 March 2026 00:55:20 +0000 (0:00:01.011) 0:03:44.132 ********* 2026-03-10 00:59:18.476795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 00:59:18.476806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.476820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 00:59:18.476827 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.476838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 00:59:18.476845 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.476852 | orchestrator | 2026-03-10 00:59:18.476858 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2026-03-10 00:59:18.476864 | orchestrator | Tuesday 10 March 2026 00:55:24 +0000 (0:00:04.452) 0:03:48.585 ********* 2026-03-10 00:59:18.476875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 00:59:18.476886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.476897 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.476903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 00:59:18.476910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.476917 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.476927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  
2026-03-10 00:59:18.476934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.476941 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.476947 | orchestrator | 2026-03-10 00:59:18.476953 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2026-03-10 00:59:18.476963 | orchestrator | Tuesday 10 March 2026 00:55:25 +0000 (0:00:01.074) 0:03:49.659 ********* 2026-03-10 00:59:18.476970 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-10 00:59:18.476981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-10 00:59:18.476988 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.476994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-10 00:59:18.477000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-10 00:59:18.477006 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.477013 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2026-03-10 00:59:18.477019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2026-03-10 00:59:18.477025 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.477031 | orchestrator | 2026-03-10 00:59:18.477037 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2026-03-10 00:59:18.477043 | orchestrator | Tuesday 10 March 2026 00:55:26 +0000 (0:00:00.985) 0:03:50.645 ********* 2026-03-10 00:59:18.477049 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.477055 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.477061 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.477067 | orchestrator | 2026-03-10 00:59:18.477073 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2026-03-10 00:59:18.477079 | orchestrator | Tuesday 10 March 2026 00:55:27 +0000 (0:00:01.408) 0:03:52.054 ********* 2026-03-10 00:59:18.477086 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.477092 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.477098 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.477104 | orchestrator | 2026-03-10 00:59:18.477110 | orchestrator | TASK [include_role : manila] *************************************************** 2026-03-10 00:59:18.477116 | orchestrator | Tuesday 10 March 2026 00:55:30 +0000 (0:00:02.311) 0:03:54.366 
********* 2026-03-10 00:59:18.477122 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.477128 | orchestrator | 2026-03-10 00:59:18.477134 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2026-03-10 00:59:18.477140 | orchestrator | Tuesday 10 March 2026 00:55:31 +0000 (0:00:01.402) 0:03:55.768 ********* 2026-03-10 00:59:18.477147 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-10 00:59:18.477158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.477172 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.477179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-10 00:59:18.477186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.477192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.477199 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.477209 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.477223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2026-03-10 00:59:18.477230 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.477236 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 
'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.477243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.477249 | orchestrator | 2026-03-10 00:59:18.477255 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2026-03-10 00:59:18.477262 | orchestrator | Tuesday 10 March 2026 00:55:35 +0000 (0:00:04.005) 0:03:59.773 ********* 2026-03-10 00:59:18.477272 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-10 00:59:18.477283 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.477293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.477299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': 
'30'}}})  2026-03-10 00:59:18.477305 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.477312 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-10 00:59:18.477318 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.477325 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.477339 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.477345 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.477355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2026-03-10 00:59:18.477361 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 
'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.477368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.477374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.477381 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.477387 | orchestrator | 2026-03-10 00:59:18.477393 | orchestrator | TASK [haproxy-config : Configuring 
firewall for manila] ************************ 2026-03-10 00:59:18.477399 | orchestrator | Tuesday 10 March 2026 00:55:36 +0000 (0:00:00.989) 0:04:00.762 ********* 2026-03-10 00:59:18.477409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-10 00:59:18.477416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-10 00:59:18.477422 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.477428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-10 00:59:18.477438 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-10 00:59:18.477445 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.477451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2026-03-10 00:59:18.477457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2026-03-10 00:59:18.477463 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.477469 | orchestrator | 2026-03-10 00:59:18.477499 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2026-03-10 00:59:18.477510 | orchestrator | Tuesday 10 March 
2026 00:55:38 +0000 (0:00:01.329) 0:04:02.092 ********* 2026-03-10 00:59:18.477519 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.477530 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.477548 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.477557 | orchestrator | 2026-03-10 00:59:18.477563 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2026-03-10 00:59:18.477569 | orchestrator | Tuesday 10 March 2026 00:55:39 +0000 (0:00:01.427) 0:04:03.520 ********* 2026-03-10 00:59:18.477576 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.477582 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.477588 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.477594 | orchestrator | 2026-03-10 00:59:18.477600 | orchestrator | TASK [include_role : mariadb] ************************************************** 2026-03-10 00:59:18.477606 | orchestrator | Tuesday 10 March 2026 00:55:41 +0000 (0:00:02.482) 0:04:06.002 ********* 2026-03-10 00:59:18.477612 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.477618 | orchestrator | 2026-03-10 00:59:18.477625 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2026-03-10 00:59:18.477631 | orchestrator | Tuesday 10 March 2026 00:55:43 +0000 (0:00:01.563) 0:04:07.566 ********* 2026-03-10 00:59:18.477637 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-10 00:59:18.477643 | orchestrator | 2026-03-10 00:59:18.477649 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2026-03-10 00:59:18.477655 | orchestrator | Tuesday 10 March 2026 00:55:46 +0000 (0:00:03.023) 0:04:10.590 ********* 2026-03-10 00:59:18.477662 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:59:18.477681 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-10 00:59:18.477688 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.477698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:59:18.477705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-10 00:59:18.477716 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.477727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:59:18.477737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-10 00:59:18.477744 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.477750 | orchestrator | 2026-03-10 00:59:18.477757 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-10 00:59:18.477763 | orchestrator | Tuesday 10 March 2026 00:55:49 +0000 (0:00:02.527) 0:04:13.118 ********* 2026-03-10 00:59:18.477769 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:59:18.477781 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': 
['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-10 00:59:18.477788 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.477814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server 
testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:59:18.477821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-10 00:59:18.477832 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.477839 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 
'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-10 00:59:18.477850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-10 00:59:18.477857 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.477863 | orchestrator | 2026-03-10 00:59:18.477869 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-10 00:59:18.477876 | orchestrator | Tuesday 10 March 2026 00:55:51 +0000 (0:00:02.536) 0:04:15.654 ********* 2026-03-10 00:59:18.477885 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 
'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-10 00:59:18.477892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-10 00:59:18.477904 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.477910 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-10 00:59:18.477917 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-10 00:59:18.477923 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.477930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-10 00:59:18.477940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-10 00:59:18.477947 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.477953 | orchestrator | 2026-03-10 00:59:18.477959 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] 
************ 2026-03-10 00:59:18.477966 | orchestrator | Tuesday 10 March 2026 00:55:54 +0000 (0:00:03.160) 0:04:18.815 ********* 2026-03-10 00:59:18.477972 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.477978 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.477984 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.477990 | orchestrator | 2026-03-10 00:59:18.477997 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-10 00:59:18.478003 | orchestrator | Tuesday 10 March 2026 00:55:56 +0000 (0:00:02.004) 0:04:20.819 ********* 2026-03-10 00:59:18.478009 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.478154 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.478162 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.478168 | orchestrator | 2026-03-10 00:59:18.478174 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-10 00:59:18.478181 | orchestrator | Tuesday 10 March 2026 00:55:58 +0000 (0:00:01.751) 0:04:22.571 ********* 2026-03-10 00:59:18.478187 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.478193 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.478203 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.478210 | orchestrator | 2026-03-10 00:59:18.478217 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-10 00:59:18.478235 | orchestrator | Tuesday 10 March 2026 00:55:58 +0000 (0:00:00.361) 0:04:22.932 ********* 2026-03-10 00:59:18.478245 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.478253 | orchestrator | 2026-03-10 00:59:18.478262 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-10 00:59:18.478295 | orchestrator | Tuesday 10 March 2026 00:56:00 +0000 
(0:00:01.467) 0:04:24.400 ********* 2026-03-10 00:59:18.478306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-10 00:59:18.478318 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-10 00:59:18.478330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-10 00:59:18.478340 | orchestrator | 2026-03-10 00:59:18.478351 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-10 00:59:18.478360 | orchestrator | Tuesday 10 March 2026 00:56:01 +0000 (0:00:01.583) 0:04:25.984 ********* 2026-03-10 00:59:18.478384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-10 00:59:18.478403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 
'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-10 00:59:18.478421 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.478432 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.478441 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-10 00:59:18.478448 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.478454 | orchestrator | 2026-03-10 00:59:18.478460 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2026-03-10 00:59:18.478466 | orchestrator | Tuesday 10 March 2026 00:56:02 +0000 (0:00:00.469) 0:04:26.453 ********* 2026-03-10 00:59:18.478491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-10 00:59:18.478500 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 
'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-10 00:59:18.478506 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.478512 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.478519 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2026-03-10 00:59:18.478525 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.478531 | orchestrator | 2026-03-10 00:59:18.478537 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2026-03-10 00:59:18.478543 | orchestrator | Tuesday 10 March 2026 00:56:03 +0000 (0:00:00.993) 0:04:27.447 ********* 2026-03-10 00:59:18.478549 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.478555 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.478561 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.478567 | orchestrator | 2026-03-10 00:59:18.478573 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2026-03-10 00:59:18.478579 | orchestrator | Tuesday 10 March 2026 00:56:03 +0000 (0:00:00.514) 0:04:27.962 ********* 2026-03-10 00:59:18.478585 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.478591 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.478597 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.478603 | orchestrator | 2026-03-10 00:59:18.478609 | orchestrator | TASK [include_role : mistral] ************************************************** 2026-03-10 00:59:18.478616 | orchestrator | Tuesday 10 March 2026 00:56:05 +0000 (0:00:01.549) 0:04:29.512 
********* 2026-03-10 00:59:18.478626 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.478632 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.478638 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.478644 | orchestrator | 2026-03-10 00:59:18.478651 | orchestrator | TASK [include_role : neutron] ************************************************** 2026-03-10 00:59:18.478661 | orchestrator | Tuesday 10 March 2026 00:56:05 +0000 (0:00:00.383) 0:04:29.895 ********* 2026-03-10 00:59:18.478668 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.478674 | orchestrator | 2026-03-10 00:59:18.478680 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2026-03-10 00:59:18.478686 | orchestrator | Tuesday 10 March 2026 00:56:07 +0000 (0:00:01.494) 0:04:31.389 ********* 2026-03-10 00:59:18.478693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 00:59:18.478701 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.478708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.478715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.478733 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-10 00:59:18.478765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.478778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:59:18.478786 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:59:18.478795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.478802 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 00:59:18.478810 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.478826 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 00:59:18.478837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 
'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-10 00:59:18.478844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:59:18.478851 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.478859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.478866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.478883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-10 00:59:18.478894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.478901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-10 00:59:18.478909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 00:59:18.478916 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-10 00:59:18.478933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.478944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.478952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.478959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:59:18.478967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.478979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:59:18.478991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-10 00:59:18.478999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 00:59:18.479025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:59:18.479032 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:59:18.479055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-10 00:59:18.479063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  
2026-03-10 00:59:18.479081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 00:59:18.479100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479112 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-10 00:59:18.479120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-10 00:59:18.479130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-10 00:59:18.479138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:59:18.479144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-10 00:59:18.479167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-10 00:59:18.479174 | orchestrator | 2026-03-10 00:59:18.479180 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2026-03-10 00:59:18.479186 | orchestrator | Tuesday 10 March 2026 00:56:11 +0000 (0:00:04.539) 0:04:35.929 ********* 2026-03-10 00:59:18.479196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 00:59:18.479203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 00:59:18.479231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479241 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-10 00:59:18.479254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:59:18.479290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-10 00:59:18.479297 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:59:18.479304 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:59:18.479327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 00:59:18.479337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:59:18.479347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 00:59:18.479354 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479377 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-10 00:59:18.479387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 00:59:18.479397 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 
5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:59:18.479416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 
'timeout': '30'}}})  2026-03-10 00:59:18.479429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-10 00:59:18.479440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-10 00:59:18.479456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:59:18.479467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-10 00:59:18.479518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-10 00:59:18.479544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:59:18.479551 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.479665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-10 00:59:18.479682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:59:18.479688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-10 00:59:18.479699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 
00:59:18.479705 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.479719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-10 00:59:18.479735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-10 00:59:18.479740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 
'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.479746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-10 00:59:18.479752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-10 00:59:18.479757 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.479763 | orchestrator | 2026-03-10 00:59:18.479768 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-10 00:59:18.479774 | orchestrator | Tuesday 10 March 2026 00:56:13 +0000 (0:00:01.659) 0:04:37.588 ********* 2026-03-10 00:59:18.479780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-10 00:59:18.479790 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-10 00:59:18.479795 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.479806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-10 00:59:18.479812 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-10 00:59:18.479817 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.479823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-10 00:59:18.479828 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-10 00:59:18.479834 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.479839 | orchestrator | 2026-03-10 00:59:18.479844 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-10 00:59:18.479850 | orchestrator | Tuesday 10 March 2026 00:56:15 +0000 (0:00:02.371) 0:04:39.960 ********* 2026-03-10 00:59:18.479855 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.479861 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.479866 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.479871 | orchestrator | 2026-03-10 00:59:18.479877 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-10 00:59:18.479882 | orchestrator | Tuesday 10 March 2026 00:56:17 +0000 (0:00:01.410) 0:04:41.371 ********* 2026-03-10 00:59:18.479887 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.479893 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.479898 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.479903 | orchestrator | 2026-03-10 00:59:18.479908 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-10 00:59:18.479914 | orchestrator | Tuesday 10 March 2026 00:56:19 +0000 (0:00:02.261) 0:04:43.632 ********* 2026-03-10 00:59:18.479919 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.479924 | orchestrator | 2026-03-10 00:59:18.479931 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-10 00:59:18.479940 | orchestrator | Tuesday 10 March 2026 00:56:20 +0000 (0:00:01.300) 0:04:44.932 ********* 2026-03-10 00:59:18.479951 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 00:59:18.479961 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 00:59:18.479996 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 00:59:18.480007 | orchestrator | 2026-03-10 00:59:18.480016 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-10 00:59:18.480025 | orchestrator | Tuesday 10 March 2026 00:56:24 +0000 (0:00:04.057) 0:04:48.990 ********* 2026-03-10 00:59:18.480034 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 
00:59:18.480044 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.480054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 00:59:18.480063 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.480072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  
2026-03-10 00:59:18.480082 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.480088 | orchestrator | 2026-03-10 00:59:18.480093 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-10 00:59:18.480099 | orchestrator | Tuesday 10 March 2026 00:56:25 +0000 (0:00:00.697) 0:04:49.687 ********* 2026-03-10 00:59:18.480104 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-10 00:59:18.480111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-10 00:59:18.480117 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.480128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-10 00:59:18.480134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-10 00:59:18.480140 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.480146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-10 00:59:18.480151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}})  2026-03-10 00:59:18.480157 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.480162 | orchestrator | 2026-03-10 00:59:18.480168 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-10 00:59:18.480173 | orchestrator | Tuesday 10 March 2026 00:56:26 +0000 (0:00:00.890) 0:04:50.578 ********* 2026-03-10 00:59:18.480178 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.480184 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.480189 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.480194 | orchestrator | 2026-03-10 00:59:18.480200 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-10 00:59:18.480205 | orchestrator | Tuesday 10 March 2026 00:56:28 +0000 (0:00:02.036) 0:04:52.614 ********* 2026-03-10 00:59:18.480211 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.480216 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.480221 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.480227 | orchestrator | 2026-03-10 00:59:18.480232 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-10 00:59:18.480238 | orchestrator | Tuesday 10 March 2026 00:56:31 +0000 (0:00:02.504) 0:04:55.119 ********* 2026-03-10 00:59:18.480243 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.480249 | orchestrator | 2026-03-10 00:59:18.480254 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-10 00:59:18.480264 | orchestrator | Tuesday 10 March 2026 00:56:33 +0000 (0:00:02.117) 0:04:57.236 ********* 2026-03-10 00:59:18.480270 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 00:59:18.480278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.480291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.480298 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 00:59:18.480305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.480315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.480323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 00:59:18.480335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.480342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.480349 | orchestrator | 2026-03-10 00:59:18.480355 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-10 00:59:18.480361 | orchestrator | Tuesday 10 March 2026 00:56:38 +0000 (0:00:05.430) 0:05:02.667 ********* 2026-03-10 00:59:18.480384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 00:59:18.480396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.480402 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.480408 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.480421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 00:59:18.480429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.480439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.480445 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.480452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 00:59:18.480459 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.480493 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.480500 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.480506 | orchestrator | 2026-03-10 00:59:18.480513 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-10 00:59:18.480520 | orchestrator | Tuesday 10 March 2026 00:56:40 +0000 (0:00:01.495) 0:05:04.162 ********* 2026-03-10 00:59:18.480526 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-10 00:59:18.480533 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-10 00:59:18.480543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-10 00:59:18.480550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-10 00:59:18.480556 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.480562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-10 00:59:18.480569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-10 00:59:18.480576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-10 00:59:18.480582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-10 00:59:18.480589 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-10 00:59:18.480595 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.480601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-10 00:59:18.480607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-10 00:59:18.480614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-10 00:59:18.480620 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.480627 | orchestrator | 2026-03-10 00:59:18.480633 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-10 00:59:18.480639 | orchestrator | Tuesday 10 March 2026 00:56:41 +0000 (0:00:01.157) 0:05:05.320 ********* 2026-03-10 00:59:18.480645 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.480650 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.480656 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.480661 | orchestrator | 2026-03-10 00:59:18.480667 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-10 00:59:18.480672 | orchestrator | Tuesday 10 March 2026 00:56:42 +0000 (0:00:01.465) 0:05:06.785 ********* 2026-03-10 00:59:18.480677 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.480683 | orchestrator | 
changed: [testbed-node-1] 2026-03-10 00:59:18.480689 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.480694 | orchestrator | 2026-03-10 00:59:18.480699 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-10 00:59:18.480705 | orchestrator | Tuesday 10 March 2026 00:56:45 +0000 (0:00:02.390) 0:05:09.176 ********* 2026-03-10 00:59:18.480710 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.480715 | orchestrator | 2026-03-10 00:59:18.480721 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-10 00:59:18.480733 | orchestrator | Tuesday 10 March 2026 00:56:46 +0000 (0:00:01.820) 0:05:10.996 ********* 2026-03-10 00:59:18.480742 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-10 00:59:18.480748 | orchestrator | 2026-03-10 00:59:18.480754 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-10 00:59:18.480759 | orchestrator | Tuesday 10 March 2026 00:56:47 +0000 (0:00:01.044) 0:05:12.041 ********* 2026-03-10 00:59:18.480765 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-10 00:59:18.480771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-10 00:59:18.480776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-10 00:59:18.480782 | orchestrator | 2026-03-10 00:59:18.480788 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-10 00:59:18.480794 | orchestrator | Tuesday 10 March 2026 00:56:53 +0000 (0:00:05.735) 0:05:17.777 ********* 2026-03-10 00:59:18.480799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-10 00:59:18.480805 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.480811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': 
{'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-10 00:59:18.480816 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.480822 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-10 00:59:18.480832 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.480838 | orchestrator | 2026-03-10 00:59:18.480843 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-10 00:59:18.480849 | orchestrator | Tuesday 10 March 2026 00:56:55 +0000 (0:00:01.307) 0:05:19.084 ********* 2026-03-10 00:59:18.480857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-10 00:59:18.480866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-10 00:59:18.480872 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.480878 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-10 00:59:18.480888 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-10 00:59:18.480894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-10 00:59:18.480902 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-10 00:59:18.480907 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.480913 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.480918 | orchestrator | 2026-03-10 00:59:18.480924 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-10 00:59:18.480929 | orchestrator | Tuesday 10 March 2026 00:56:57 +0000 (0:00:02.396) 0:05:21.481 ********* 2026-03-10 00:59:18.480934 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.480940 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.480945 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.480950 | orchestrator | 2026-03-10 00:59:18.480956 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-10 00:59:18.480961 | orchestrator | Tuesday 10 March 2026 00:57:00 +0000 (0:00:03.030) 
0:05:24.511 ********* 2026-03-10 00:59:18.480967 | orchestrator | changed: [testbed-node-0] 2026-03-10 00:59:18.480972 | orchestrator | changed: [testbed-node-1] 2026-03-10 00:59:18.480978 | orchestrator | changed: [testbed-node-2] 2026-03-10 00:59:18.480983 | orchestrator | 2026-03-10 00:59:18.480988 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-10 00:59:18.480994 | orchestrator | Tuesday 10 March 2026 00:57:03 +0000 (0:00:03.502) 0:05:28.013 ********* 2026-03-10 00:59:18.480999 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-10 00:59:18.481005 | orchestrator | 2026-03-10 00:59:18.481010 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-10 00:59:18.481016 | orchestrator | Tuesday 10 March 2026 00:57:05 +0000 (0:00:01.556) 0:05:29.570 ********* 2026-03-10 00:59:18.481021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-10 00:59:18.481032 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.481038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 
'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-10 00:59:18.481043 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.481051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-10 00:59:18.481057 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.481063 | orchestrator | 2026-03-10 00:59:18.481071 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-10 00:59:18.481077 | orchestrator | Tuesday 10 March 2026 00:57:06 +0000 (0:00:01.406) 0:05:30.976 ********* 2026-03-10 00:59:18.481083 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-10 00:59:18.481088 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.481094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 
'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-10 00:59:18.481099 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.481105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-10 00:59:18.481111 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.481116 | orchestrator | 2026-03-10 00:59:18.481121 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-10 00:59:18.481127 | orchestrator | Tuesday 10 March 2026 00:57:08 +0000 (0:00:01.340) 0:05:32.317 ********* 2026-03-10 00:59:18.481137 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.481142 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.481148 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.481153 | orchestrator | 2026-03-10 00:59:18.481158 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-10 00:59:18.481164 | orchestrator | Tuesday 10 March 2026 00:57:10 +0000 (0:00:01.890) 0:05:34.207 ********* 2026-03-10 00:59:18.481169 | orchestrator | ok: 
[testbed-node-0] 2026-03-10 00:59:18.481175 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:59:18.481180 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:59:18.481185 | orchestrator | 2026-03-10 00:59:18.481191 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-10 00:59:18.481196 | orchestrator | Tuesday 10 March 2026 00:57:12 +0000 (0:00:02.493) 0:05:36.701 ********* 2026-03-10 00:59:18.481202 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:59:18.481207 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:59:18.481213 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:59:18.481218 | orchestrator | 2026-03-10 00:59:18.481223 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-10 00:59:18.481229 | orchestrator | Tuesday 10 March 2026 00:57:15 +0000 (0:00:03.066) 0:05:39.768 ********* 2026-03-10 00:59:18.481234 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-10 00:59:18.481240 | orchestrator | 2026-03-10 00:59:18.481245 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-10 00:59:18.481251 | orchestrator | Tuesday 10 March 2026 00:57:16 +0000 (0:00:00.945) 0:05:40.713 ********* 2026-03-10 00:59:18.481256 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-10 00:59:18.481262 | 
orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.481274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-10 00:59:18.481280 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.481286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-10 00:59:18.481292 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.481297 | orchestrator | 2026-03-10 00:59:18.481303 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-10 00:59:18.481308 | orchestrator | Tuesday 10 March 2026 00:57:18 +0000 (0:00:01.421) 0:05:42.135 ********* 2026-03-10 00:59:18.481314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 
'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-10 00:59:18.481323 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.481329 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-10 00:59:18.481335 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.481340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-10 00:59:18.481346 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.481351 | orchestrator | 2026-03-10 00:59:18.481357 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-10 00:59:18.481362 | orchestrator | Tuesday 10 March 2026 00:57:19 +0000 (0:00:01.573) 0:05:43.708 ********* 2026-03-10 00:59:18.481367 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.481373 | orchestrator | skipping: [testbed-node-1] 
2026-03-10 00:59:18.481378 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.481383 | orchestrator | 2026-03-10 00:59:18.481389 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-10 00:59:18.481394 | orchestrator | Tuesday 10 March 2026 00:57:21 +0000 (0:00:01.822) 0:05:45.531 ********* 2026-03-10 00:59:18.481400 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:59:18.481405 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:59:18.481410 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:59:18.481416 | orchestrator | 2026-03-10 00:59:18.481421 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-10 00:59:18.481427 | orchestrator | Tuesday 10 March 2026 00:57:24 +0000 (0:00:02.637) 0:05:48.169 ********* 2026-03-10 00:59:18.481432 | orchestrator | ok: [testbed-node-0] 2026-03-10 00:59:18.481437 | orchestrator | ok: [testbed-node-1] 2026-03-10 00:59:18.481443 | orchestrator | ok: [testbed-node-2] 2026-03-10 00:59:18.481448 | orchestrator | 2026-03-10 00:59:18.481453 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-10 00:59:18.481459 | orchestrator | Tuesday 10 March 2026 00:57:27 +0000 (0:00:03.719) 0:05:51.889 ********* 2026-03-10 00:59:18.481464 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 00:59:18.481470 | orchestrator | 2026-03-10 00:59:18.481493 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-10 00:59:18.481502 | orchestrator | Tuesday 10 March 2026 00:57:29 +0000 (0:00:01.715) 0:05:53.604 ********* 2026-03-10 00:59:18.481522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 00:59:18.481534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 00:59:18.481540 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 00:59:18.481546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 00:59:18.481552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.481558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 00:59:18.481567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 00:59:18.481576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 00:59:18.481582 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 
00:59:18.481587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.481593 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 00:59:18.481652 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 00:59:18.481676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 00:59:18.481686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 00:59:18.481692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 
5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.481698 | orchestrator | 2026-03-10 00:59:18.481703 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-10 00:59:18.481709 | orchestrator | Tuesday 10 March 2026 00:57:33 +0000 (0:00:03.971) 0:05:57.575 ********* 2026-03-10 00:59:18.481714 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 00:59:18.481720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 00:59:18.481726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 
'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 00:59:18.481845 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 00:59:18.481859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 00:59:18.481865 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.481871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 00:59:18.481877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 00:59:18.481882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  
2026-03-10 00:59:18.481888 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 00:59:18.481923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 00:59:18.481934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-10 00:59:18.481940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-10 00:59:18.481949 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.481959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-10 00:59:18.481969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-10 00:59:18.481977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-10 00:59:18.481986 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.481996 | orchestrator |
2026-03-10 00:59:18.482005 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2026-03-10 00:59:18.482046 | orchestrator | Tuesday 10 March 2026 00:57:34 +0000 (0:00:00.828) 0:05:58.404 *********
2026-03-10 00:59:18.482057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-10 00:59:18.482068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-10 00:59:18.482078 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.482111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-10 00:59:18.482122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-10 00:59:18.482128 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.482133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-10 00:59:18.482138 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2026-03-10 00:59:18.482144 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.482149 | orchestrator |
2026-03-10 00:59:18.482155 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2026-03-10 00:59:18.482160 | orchestrator | Tuesday 10 March 2026 00:57:36 +0000 (0:00:01.760) 0:06:00.165 *********
2026-03-10 00:59:18.482165 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:59:18.482171 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:59:18.482176 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:59:18.482182 | orchestrator |
2026-03-10 00:59:18.482187 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2026-03-10 00:59:18.482192 | orchestrator | Tuesday 10 March 2026 00:57:37 +0000 (0:00:01.430) 0:06:01.595 *********
2026-03-10 00:59:18.482198 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:59:18.482203 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:59:18.482208 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:59:18.482213 | orchestrator |
2026-03-10 00:59:18.482219 | orchestrator | TASK [include_role : opensearch] ***********************************************
2026-03-10 00:59:18.482224 | orchestrator | Tuesday 10 March 2026 00:57:39 +0000 (0:00:02.284) 0:06:03.879 *********
2026-03-10 00:59:18.482229 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:59:18.482235 | orchestrator |
2026-03-10 00:59:18.482240 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2026-03-10 00:59:18.482245 | orchestrator | Tuesday 10 March 2026 00:57:41 +0000 (0:00:01.854) 0:06:05.734 *********
2026-03-10 00:59:18.482251 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-10 00:59:18.482263 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-10 00:59:18.482288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-10 00:59:18.482296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-10 00:59:18.482303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-10 00:59:18.482310 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-10 00:59:18.482320 | orchestrator |
2026-03-10 00:59:18.482326 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2026-03-10 00:59:18.482331 | orchestrator | Tuesday 10 March 2026 00:57:47 +0000 (0:00:05.550) 0:06:11.285 *********
2026-03-10 00:59:18.482355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-10 00:59:18.482362 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-10 00:59:18.482368 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.482374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-10 00:59:18.482384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-10 00:59:18.482390 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.482412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2026-03-10 00:59:18.482424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2026-03-10 00:59:18.482430 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.482436 | orchestrator |
2026-03-10 00:59:18.482441 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2026-03-10 00:59:18.482447 | orchestrator | Tuesday 10 March 2026 00:57:47 +0000 (0:00:00.704) 0:06:11.990 *********
2026-03-10 00:59:18.482453 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-03-10 00:59:18.482460 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-10 00:59:18.482466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-10 00:59:18.482495 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.482502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-03-10 00:59:18.482508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-10 00:59:18.482514 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-10 00:59:18.482520 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.482527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2026-03-10 00:59:18.482533 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-10 00:59:18.482539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2026-03-10 00:59:18.482545 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.482551 | orchestrator |
2026-03-10 00:59:18.482558 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] *********
2026-03-10 00:59:18.482564 | orchestrator | Tuesday 10 March 2026 00:57:48 +0000 (0:00:00.947) 0:06:12.938 *********
2026-03-10 00:59:18.482570 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.482576 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.482582 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.482588 | orchestrator |
2026-03-10 00:59:18.482594 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] *********
2026-03-10 00:59:18.482601 | orchestrator | Tuesday 10 March 2026 00:57:49 +0000 (0:00:00.942) 0:06:13.880 *********
2026-03-10 00:59:18.482607 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.482613 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.482619 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.482625 | orchestrator |
2026-03-10 00:59:18.482649 | orchestrator | TASK [include_role : prometheus] ***********************************************
2026-03-10 00:59:18.482660 | orchestrator | Tuesday 10 March 2026 00:57:51 +0000 (0:00:01.488) 0:06:15.368 *********
2026-03-10 00:59:18.482666 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:59:18.482672 | orchestrator |
2026-03-10 00:59:18.482678 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] *****************
2026-03-10 00:59:18.482684 | orchestrator | Tuesday 10 March 2026 00:57:52 +0000 (0:00:01.546) 0:06:16.915 *********
2026-03-10 00:59:18.482691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-10 00:59:18.482702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-10 00:59:18.482710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:59:18.482717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:59:18.482724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-10 00:59:18.482731 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-10 00:59:18.482757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-10 00:59:18.482764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-10 00:59:18.482773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-10 00:59:18.482779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:59:18.482784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:59:18.482790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:59:18.482796 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:59:18.482819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-10 00:59:18.482826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-10 00:59:18.482836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-10 00:59:18.482842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-10 00:59:18.482848 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:59:18.482854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:59:18.482860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-10 00:59:18.482871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-10 00:59:18.482884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-10 00:59:18.482890 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-10 00:59:18.482896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-10 00:59:18.482906 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:59:18.482915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:59:18.482925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-10 00:59:18.482931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes':
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:59:18.482937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 00:59:18.482943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 00:59:18.482948 | orchestrator | 2026-03-10 00:59:18.482954 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-10 00:59:18.482960 | orchestrator | Tuesday 10 March 2026 00:57:58 +0000 (0:00:05.456) 0:06:22.372 ********* 2026-03-10 00:59:18.482965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-10 00:59:18.482971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 00:59:18.482982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:59:18.482993 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:59:18.482998 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 00:59:18.483004 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-10 00:59:18.483010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-10 00:59:18.483016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:59:18.483029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-10 00:59:18.483041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:59:18.483047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 00:59:18.483052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 00:59:18.483058 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.483064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:59:18.483069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:59:18.483075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 00:59:18.483087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-10 00:59:18.483099 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-10 00:59:18.483105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-10 00:59:18.483111 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 00:59:18.483117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:59:18.483122 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:59:18.483132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:59:18.483144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 00:59:18.483151 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:59:18.483156 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.483162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 00:59:18.483168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-10 00:59:18.483174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-10 00:59:18.483183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:59:18.483196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 00:59:18.483202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 00:59:18.483207 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.483213 | orchestrator | 2026-03-10 00:59:18.483218 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2026-03-10 00:59:18.483224 | orchestrator | Tuesday 10 March 2026 00:57:59 +0000 (0:00:01.040) 0:06:23.413 ********* 2026-03-10 00:59:18.483229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-10 00:59:18.483235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-10 00:59:18.483241 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-10 00:59:18.483247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-10 00:59:18.483253 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.483258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-10 00:59:18.483264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-10 00:59:18.483270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-10 00:59:18.483275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 
'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-10 00:59:18.483285 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.483290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2026-03-10 00:59:18.483296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2026-03-10 00:59:18.483301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-10 00:59:18.483313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2026-03-10 00:59:18.483318 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.483324 | orchestrator | 2026-03-10 00:59:18.483330 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2026-03-10 00:59:18.483335 | orchestrator | Tuesday 10 March 2026 00:58:00 +0000 (0:00:01.284) 0:06:24.698 ********* 2026-03-10 00:59:18.483340 | orchestrator | skipping: [testbed-node-0] 2026-03-10 00:59:18.483346 | orchestrator | skipping: [testbed-node-1] 2026-03-10 00:59:18.483351 | orchestrator | skipping: [testbed-node-2] 2026-03-10 00:59:18.483356 | orchestrator | 2026-03-10 
00:59:18.483362 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-03-10 00:59:18.483367 | orchestrator | Tuesday 10 March 2026 00:58:01 +0000 (0:00:00.510) 0:06:25.208 *********
2026-03-10 00:59:18.483372 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.483378 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.483383 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.483388 | orchestrator |
2026-03-10 00:59:18.483394 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-03-10 00:59:18.483399 | orchestrator | Tuesday 10 March 2026 00:58:02 +0000 (0:00:01.619) 0:06:26.828 *********
2026-03-10 00:59:18.483404 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:59:18.483410 | orchestrator |
2026-03-10 00:59:18.483415 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-03-10 00:59:18.483420 | orchestrator | Tuesday 10 March 2026 00:58:04 +0000 (0:00:02.067) 0:06:28.896 *********
2026-03-10 00:59:18.483426 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-10 00:59:18.483437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-10 00:59:18.483443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-10 00:59:18.483449 | orchestrator |
2026-03-10 00:59:18.483457 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-03-10 00:59:18.483465 | orchestrator | Tuesday 10 March 2026 00:58:08 +0000 (0:00:03.212) 0:06:32.108 *********
2026-03-10 00:59:18.483511 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-10 00:59:18.483520 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.483526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-10 00:59:18.483536 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.483542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-10 00:59:18.483548 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.483553 | orchestrator |
2026-03-10 00:59:18.483558 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-03-10 00:59:18.483564 | orchestrator | Tuesday 10 March 2026 00:58:08 +0000 (0:00:00.885) 0:06:32.994 *********
2026-03-10 00:59:18.483569 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-10 00:59:18.483575 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.483580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-10 00:59:18.483586 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.483591 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-10 00:59:18.483597 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.483602 | orchestrator |
2026-03-10 00:59:18.483608 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-03-10 00:59:18.483613 | orchestrator | Tuesday 10 March 2026 00:58:09 +0000 (0:00:00.829) 0:06:33.823 *********
2026-03-10 00:59:18.483622 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.483627 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.483637 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.483642 | orchestrator |
2026-03-10 00:59:18.483648 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-03-10 00:59:18.483653 | orchestrator | Tuesday 10 March 2026 00:58:10 +0000 (0:00:00.505) 0:06:34.329 *********
2026-03-10 00:59:18.483658 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.483664 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.483669 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.483674 | orchestrator |
2026-03-10 00:59:18.483680 | orchestrator | TASK [include_role : skyline] **************************************************
2026-03-10 00:59:18.483685 | orchestrator | Tuesday 10 March 2026 00:58:12 +0000 (0:00:01.815) 0:06:36.144 *********
2026-03-10 00:59:18.483691 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 00:59:18.483696 | orchestrator |
2026-03-10 00:59:18.483701 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-03-10 00:59:18.483707 | orchestrator | Tuesday 10 March 2026 00:58:14 +0000 (0:00:02.302) 0:06:38.447 *********
2026-03-10 00:59:18.483713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-10 00:59:18.483723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-10 00:59:18.483729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-10 00:59:18.483742 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-10 00:59:18.483749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-10 00:59:18.483761 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-10 00:59:18.483766 | orchestrator |
2026-03-10 00:59:18.483772 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-03-10 00:59:18.483777 | orchestrator | Tuesday 10 March 2026 00:58:21 +0000 (0:00:07.131) 0:06:45.579 *********
2026-03-10 00:59:18.483783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-10 00:59:18.483794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-10 00:59:18.483800 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.483806 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-10 00:59:18.483815 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-10 00:59:18.483821 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.483826 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-10 00:59:18.483832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-10 00:59:18.483837 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.483843 | orchestrator |
2026-03-10 00:59:18.483849 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-03-10 00:59:18.483857 | orchestrator | Tuesday 10 March 2026 00:58:22 +0000 (0:00:00.818) 0:06:46.397 *********
2026-03-10 00:59:18.483866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-10 00:59:18.483872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-10 00:59:18.483881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-10 00:59:18.483887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-10 00:59:18.483892 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.483898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-10 00:59:18.483903 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-10 00:59:18.483909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-10 00:59:18.483914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-10 00:59:18.483920 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.483925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-10 00:59:18.483934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-10 00:59:18.483943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-10 00:59:18.483952 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-10 00:59:18.483961 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.483971 | orchestrator |
2026-03-10 00:59:18.483978 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-03-10 00:59:18.483984 | orchestrator | Tuesday 10 March 2026 00:58:24 +0000 (0:00:01.941) 0:06:48.338 *********
2026-03-10 00:59:18.483989 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:59:18.483995 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:59:18.484000 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:59:18.484005 | orchestrator |
2026-03-10 00:59:18.484011 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-03-10 00:59:18.484016 | orchestrator | Tuesday 10 March 2026 00:58:25 +0000 (0:00:01.536) 0:06:49.875 *********
2026-03-10 00:59:18.484022 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:59:18.484027 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:59:18.484032 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:59:18.484038 | orchestrator |
2026-03-10 00:59:18.484043 | orchestrator | TASK [include_role : swift] ****************************************************
2026-03-10 00:59:18.484048 | orchestrator | Tuesday 10 March 2026 00:58:28 +0000 (0:00:02.437) 0:06:52.313 *********
2026-03-10 00:59:18.484057 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.484062 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.484067 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.484072 | orchestrator |
2026-03-10 00:59:18.484077 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-03-10 00:59:18.484081 | orchestrator | Tuesday 10 March 2026 00:58:28 +0000 (0:00:00.382) 0:06:52.695 *********
2026-03-10 00:59:18.484086 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.484091 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.484096 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.484100 | orchestrator |
2026-03-10 00:59:18.484105 | orchestrator | TASK [include_role : trove] ****************************************************
2026-03-10 00:59:18.484113 | orchestrator | Tuesday 10 March 2026 00:58:29 +0000 (0:00:00.368) 0:06:53.064 *********
2026-03-10 00:59:18.484118 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.484126 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.484131 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.484135 | orchestrator |
2026-03-10 00:59:18.484140 | orchestrator | TASK [include_role : venus] ****************************************************
2026-03-10 00:59:18.484145 | orchestrator | Tuesday 10 March 2026 00:58:29 +0000 (0:00:00.745) 0:06:53.810 *********
2026-03-10 00:59:18.484150 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.484155 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.484159 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.484164 | orchestrator |
2026-03-10 00:59:18.484169 | orchestrator | TASK [include_role : watcher] **************************************************
2026-03-10 00:59:18.484174 | orchestrator | Tuesday 10 March 2026 00:58:30 +0000 (0:00:00.331) 0:06:54.142 *********
2026-03-10 00:59:18.484178 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.484183 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.484188 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.484193 | orchestrator |
2026-03-10 00:59:18.484197 | orchestrator | TASK [include_role : zun] ******************************************************
2026-03-10 00:59:18.484202 | orchestrator | Tuesday 10 March 2026 00:58:30 +0000 (0:00:00.355) 0:06:54.498 *********
2026-03-10 00:59:18.484207 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.484212 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.484216 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.484221 | orchestrator |
2026-03-10 00:59:18.484226 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2026-03-10 00:59:18.484231 | orchestrator | Tuesday 10 March 2026 00:58:31 +0000 (0:00:00.921) 0:06:55.419 *********
2026-03-10 00:59:18.484236 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:59:18.484240 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:59:18.484245 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:59:18.484250 | orchestrator |
2026-03-10 00:59:18.484255 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2026-03-10 00:59:18.484260 | orchestrator | Tuesday 10 March 2026 00:58:32 +0000 (0:00:00.726) 0:06:56.146 *********
2026-03-10 00:59:18.484265 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:59:18.484269 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:59:18.484274 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:59:18.484279 | orchestrator |
2026-03-10 00:59:18.484284 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2026-03-10 00:59:18.484289 | orchestrator | Tuesday 10 March 2026 00:58:32 +0000 (0:00:00.401) 0:06:56.548 *********
2026-03-10 00:59:18.484294 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:59:18.484298 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:59:18.484303 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:59:18.484308 | orchestrator |
2026-03-10 00:59:18.484313 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2026-03-10 00:59:18.484317 | orchestrator | Tuesday 10 March 2026 00:58:33 +0000 (0:00:00.921) 0:06:57.469 *********
2026-03-10 00:59:18.484322 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:59:18.484327 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:59:18.484335 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:59:18.484340 | orchestrator |
2026-03-10 00:59:18.484345 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2026-03-10 00:59:18.484350 | orchestrator | Tuesday 10 March 2026 00:58:34 +0000 (0:00:01.357) 0:06:58.827 *********
2026-03-10 00:59:18.484354 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:59:18.484359 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:59:18.484364 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:59:18.484368 | orchestrator |
2026-03-10 00:59:18.484373 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2026-03-10 00:59:18.484378 | orchestrator | Tuesday 10 March 2026 00:58:35 +0000 (0:00:00.985) 0:06:59.812 *********
2026-03-10 00:59:18.484386 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:59:18.484394 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:59:18.484406 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:59:18.484415 | orchestrator |
2026-03-10 00:59:18.484422 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2026-03-10 00:59:18.484429 | orchestrator | Tuesday 10 March 2026 00:58:41 +0000 (0:00:05.315) 0:07:05.128 *********
2026-03-10 00:59:18.484436 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:59:18.484443 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:59:18.484450 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:59:18.484458 | orchestrator |
2026-03-10 00:59:18.484465 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2026-03-10 00:59:18.484485 | orchestrator | Tuesday 10 March 2026 00:58:44 +0000 (0:00:03.801) 0:07:08.929 *********
2026-03-10 00:59:18.484493 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:59:18.484500 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:59:18.484507 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:59:18.484514 | orchestrator |
2026-03-10 00:59:18.484522 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2026-03-10 00:59:18.484529 | orchestrator | Tuesday 10 March 2026 00:59:00 +0000 (0:00:15.754) 0:07:24.684 *********
2026-03-10 00:59:18.484537 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:59:18.484544 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:59:18.484551 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:59:18.484558 | orchestrator |
2026-03-10 00:59:18.484565 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2026-03-10 00:59:18.484573 | orchestrator | Tuesday 10 March 2026 00:59:01 +0000 (0:00:00.781) 0:07:25.466 *********
2026-03-10 00:59:18.484580 | orchestrator | changed: [testbed-node-0]
2026-03-10 00:59:18.484587 | orchestrator | changed: [testbed-node-1]
2026-03-10 00:59:18.484594 | orchestrator | changed: [testbed-node-2]
2026-03-10 00:59:18.484601 | orchestrator |
2026-03-10 00:59:18.484608 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2026-03-10 00:59:18.484614 | orchestrator | Tuesday 10 March 2026 00:59:06 +0000 (0:00:04.686) 0:07:30.152 *********
2026-03-10 00:59:18.484621 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.484628 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.484635 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.484642 | orchestrator |
2026-03-10 00:59:18.484649 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2026-03-10 00:59:18.484656 | orchestrator | Tuesday 10 March 2026 00:59:06 +0000 (0:00:00.395) 0:07:30.547 *********
2026-03-10 00:59:18.484663 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.484676 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.484683 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.484692 | orchestrator |
2026-03-10 00:59:18.484704 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2026-03-10 00:59:18.484713 | orchestrator | Tuesday 10 March 2026 00:59:07 +0000 (0:00:00.790) 0:07:31.338 *********
2026-03-10 00:59:18.484722 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.484728 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.484733 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.484743 | orchestrator |
2026-03-10 00:59:18.484751 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2026-03-10 00:59:18.484759 | orchestrator | Tuesday 10 March 2026 00:59:07 +0000 (0:00:00.371) 0:07:31.709 *********
2026-03-10 00:59:18.484766 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.484773 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.484781 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.484788 | orchestrator |
2026-03-10 00:59:18.484796 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2026-03-10 00:59:18.484804 | orchestrator | Tuesday 10 March 2026 00:59:08 +0000 (0:00:00.420) 0:07:32.130 *********
2026-03-10 00:59:18.484812 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.484820 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.484827 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.484835 | orchestrator |
2026-03-10 00:59:18.484842 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2026-03-10 00:59:18.484850 | orchestrator | Tuesday 10 March 2026 00:59:08 +0000 (0:00:00.364) 0:07:32.494 *********
2026-03-10 00:59:18.484857 | orchestrator | skipping: [testbed-node-0]
2026-03-10 00:59:18.484866 | orchestrator | skipping: [testbed-node-1]
2026-03-10 00:59:18.484874 | orchestrator | skipping: [testbed-node-2]
2026-03-10 00:59:18.484882 | orchestrator |
2026-03-10 00:59:18.484890 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2026-03-10 00:59:18.484899 | orchestrator | Tuesday 10 March 2026 00:59:08 +0000 (0:00:00.371) 0:07:32.865 *********
2026-03-10 00:59:18.484903 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:59:18.484908 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:59:18.484913 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:59:18.484918 | orchestrator |
2026-03-10 00:59:18.484922 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2026-03-10 00:59:18.484927 | orchestrator | Tuesday 10 March 2026 00:59:14 +0000 (0:00:05.223) 0:07:38.088 *********
2026-03-10 00:59:18.484932 | orchestrator | ok: [testbed-node-0]
2026-03-10 00:59:18.484936 | orchestrator | ok: [testbed-node-1]
2026-03-10 00:59:18.484941 | orchestrator | ok: [testbed-node-2]
2026-03-10 00:59:18.484946 | orchestrator |
2026-03-10 00:59:18.484951 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 00:59:18.484956 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-10 00:59:18.484961 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-10 00:59:18.484966 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2026-03-10 00:59:18.484970 | orchestrator |
2026-03-10 00:59:18.484975 | orchestrator |
2026-03-10 00:59:18.484980 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 00:59:18.484985 | orchestrator | Tuesday 10 March 2026 00:59:15 +0000 (0:00:00.980) 0:07:39.069 *********
2026-03-10 00:59:18.484989 | orchestrator | ===============================================================================
2026-03-10 00:59:18.484994 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 15.75s
2026-03-10 00:59:18.484999 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 7.38s
2026-03-10 00:59:18.485004 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.13s
2026-03-10 00:59:18.485008 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 6.49s
2026-03-10 00:59:18.485013 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 6.02s
2026-03-10 00:59:18.485018 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 5.94s
2026-03-10 00:59:18.485022 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.85s
2026-03-10 00:59:18.485027 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.74s
2026-03-10 00:59:18.485037 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.55s
2026-03-10 00:59:18.485042 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 5.46s
2026-03-10 00:59:18.485046 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.43s
2026-03-10 00:59:18.485051 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 5.42s
2026-03-10 00:59:18.485056 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 5.32s
2026-03-10 00:59:18.485060 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.22s
2026-03-10 00:59:18.485065 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.18s
2026-03-10 00:59:18.485070 | orchestrator | loadbalancer : 
Copying checks for services which are enabled ------------ 5.08s 2026-03-10 00:59:18.485074 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 4.90s 2026-03-10 00:59:18.485079 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.69s 2026-03-10 00:59:18.485084 | orchestrator | haproxy-config : Copying over ceph-rgw haproxy config ------------------- 4.61s 2026-03-10 00:59:18.485089 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.54s 2026-03-10 00:59:18.485101 | orchestrator | 2026-03-10 00:59:18 | INFO  | Task d9c72e11-5900-42cc-b16c-d79be021e929 is in state STARTED 2026-03-10 00:59:18.485107 | orchestrator | 2026-03-10 00:59:18 | INFO  | Task 4a030905-8bff-4849-ae35-0cf98349a90d is in state STARTED 2026-03-10 00:59:18.485112 | orchestrator | 2026-03-10 00:59:18 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:59:21.529196 | orchestrator | 2026-03-10 00:59:21 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:59:21.530811 | orchestrator | 2026-03-10 00:59:21 | INFO  | Task d9c72e11-5900-42cc-b16c-d79be021e929 is in state STARTED 2026-03-10 00:59:21.532835 | orchestrator | 2026-03-10 00:59:21 | INFO  | Task 4a030905-8bff-4849-ae35-0cf98349a90d is in state STARTED 2026-03-10 00:59:21.532902 | orchestrator | 2026-03-10 00:59:21 | INFO  | Wait 1 second(s) until the next check 2026-03-10 00:59:24.584692 | orchestrator | 2026-03-10 00:59:24 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:59:24.585559 | orchestrator | 2026-03-10 00:59:24 | INFO  | Task d9c72e11-5900-42cc-b16c-d79be021e929 is in state STARTED 2026-03-10 00:59:24.586834 | orchestrator | 2026-03-10 00:59:24 | INFO  | Task 4a030905-8bff-4849-ae35-0cf98349a90d is in state STARTED 2026-03-10 00:59:24.586883 | orchestrator | 2026-03-10 00:59:24 | INFO  | Wait 1 second(s) until the next check 2026-03-10 
00:59:27.629845 | orchestrator | 2026-03-10 00:59:27 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 00:59:27.637272 | orchestrator | 2026-03-10 00:59:27 | INFO  | Task d9c72e11-5900-42cc-b16c-d79be021e929 is in state STARTED 2026-03-10 00:59:27.639949 | orchestrator | 2026-03-10 00:59:27 | INFO  | Task 4a030905-8bff-4849-ae35-0cf98349a90d is in state STARTED 2026-03-10 00:59:27.640032 | orchestrator | 2026-03-10 00:59:27 | INFO  | Wait 1 second(s) until the next check [log trimmed: the same three tasks remained in state STARTED, with identical checks repeated roughly every 3 seconds, until 01:01:23] 2026-03-10 01:01:23.722695 | orchestrator | 2026-03-10 01:01:23 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 01:01:23.724641 | orchestrator
| 2026-03-10 01:01:23 | INFO  | Task d9c72e11-5900-42cc-b16c-d79be021e929 is in state STARTED 2026-03-10 01:01:23.726085 | orchestrator | 2026-03-10 01:01:23 | INFO  | Task 4a030905-8bff-4849-ae35-0cf98349a90d is in state STARTED 2026-03-10 01:01:23.726473 | orchestrator | 2026-03-10 01:01:23 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:01:26.771317 | orchestrator | 2026-03-10 01:01:26 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state STARTED 2026-03-10 01:01:26.772979 | orchestrator | 2026-03-10 01:01:26 | INFO  | Task d9c72e11-5900-42cc-b16c-d79be021e929 is in state STARTED 2026-03-10 01:01:26.775129 | orchestrator | 2026-03-10 01:01:26 | INFO  | Task 4a030905-8bff-4849-ae35-0cf98349a90d is in state STARTED 2026-03-10 01:01:26.775213 | orchestrator | 2026-03-10 01:01:26 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:01:29.832774 | orchestrator | 2026-03-10 01:01:29 | INFO  | Task fca26bd9-e727-4e30-8116-6d36e203d006 is in state SUCCESS 2026-03-10 01:01:29.833839 | orchestrator | 2026-03-10 01:01:29.833869 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-10 01:01:29.833874 | orchestrator | 2.16.14 2026-03-10 01:01:29.833899 | orchestrator | 2026-03-10 01:01:29.833906 | orchestrator | PLAY [Prepare deployment of Ceph services] ************************************* 2026-03-10 01:01:29.833912 | orchestrator | 2026-03-10 01:01:29.833918 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-10 01:01:29.833925 | orchestrator | Tuesday 10 March 2026 00:48:55 +0000 (0:00:01.200) 0:00:01.200 ********* 2026-03-10 01:01:29.833933 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:01:29.833939 | orchestrator | 2026-03-10 01:01:29.833946 | orchestrator | TASK [ceph-facts : Check if it is atomic host] 
********************************* 2026-03-10 01:01:29.833952 | orchestrator | Tuesday 10 March 2026 00:48:57 +0000 (0:00:01.525) 0:00:02.726 ********* 2026-03-10 01:01:29.833957 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.833964 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.833968 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.833973 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.833978 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.833982 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.833987 | orchestrator | 2026-03-10 01:01:29.833993 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-10 01:01:29.834005 | orchestrator | Tuesday 10 March 2026 00:48:59 +0000 (0:00:02.164) 0:00:04.890 ********* 2026-03-10 01:01:29.834009 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.834012 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.834035 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.834038 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.834041 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.834044 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.834047 | orchestrator | 2026-03-10 01:01:29.834050 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-10 01:01:29.834054 | orchestrator | Tuesday 10 March 2026 00:49:00 +0000 (0:00:01.450) 0:00:06.341 ********* 2026-03-10 01:01:29.834057 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.834060 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.834063 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.834066 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.834069 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.834072 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.834075 | orchestrator | 2026-03-10 01:01:29.834079 | orchestrator | TASK [ceph-facts : 
Set_fact container_binary] **********************************
2026-03-10 01:01:29.834092 | orchestrator | Tuesday 10 March 2026 00:49:01 +0000 (0:00:01.109) 0:00:07.450 *********
2026-03-10 01:01:29.834095 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.834098 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.834101 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.834104 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.834107 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.834111 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.834114 | orchestrator |
2026-03-10 01:01:29.834117 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-10 01:01:29.834120 | orchestrator | Tuesday 10 March 2026 00:49:03 +0000 (0:00:01.064) 0:00:08.515 *********
2026-03-10 01:01:29.834123 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.834128 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.834133 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.834141 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.834147 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.834152 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.834157 | orchestrator |
2026-03-10 01:01:29.834162 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-10 01:01:29.834167 | orchestrator | Tuesday 10 March 2026 00:49:03 +0000 (0:00:00.714) 0:00:09.229 *********
2026-03-10 01:01:29.834173 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.834178 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.834183 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.834188 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.834193 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.834199 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.834204 | orchestrator |
2026-03-10 01:01:29.834210 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-10 01:01:29.834216 | orchestrator | Tuesday 10 March 2026 00:49:04 +0000 (0:00:00.944) 0:00:10.174 *********
2026-03-10 01:01:29.834274 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.834281 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.834287 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.834307 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.834314 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.834320 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.834325 | orchestrator |
2026-03-10 01:01:29.834331 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-10 01:01:29.834349 | orchestrator | Tuesday 10 March 2026 00:49:05 +0000 (0:00:01.195) 0:00:11.370 *********
2026-03-10 01:01:29.834364 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.834386 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.834392 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.834408 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.834415 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.834443 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.834470 | orchestrator |
2026-03-10 01:01:29.834475 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-10 01:01:29.834479 | orchestrator | Tuesday 10 March 2026 00:49:07 +0000 (0:00:01.529) 0:00:12.899 *********
2026-03-10 01:01:29.834483 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-10 01:01:29.834487 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-10 01:01:29.834490 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-10 01:01:29.834494 | orchestrator |
2026-03-10 01:01:29.834498 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-10 01:01:29.834501 | orchestrator | Tuesday 10 March 2026 00:49:08 +0000 (0:00:00.743) 0:00:13.643 *********
2026-03-10 01:01:29.834505 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.834509 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.834515 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.834539 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.834545 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.834550 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.834555 | orchestrator |
2026-03-10 01:01:29.834560 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-10 01:01:29.834565 | orchestrator | Tuesday 10 March 2026 00:49:10 +0000 (0:00:02.075) 0:00:15.719 *********
2026-03-10 01:01:29.834570 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-10 01:01:29.834575 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-10 01:01:29.834579 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-10 01:01:29.834584 | orchestrator |
2026-03-10 01:01:29.834589 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2026-03-10 01:01:29.834593 | orchestrator | Tuesday 10 March 2026 00:49:13 +0000 (0:00:03.203) 0:00:18.922 *********
2026-03-10 01:01:29.834599 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-10 01:01:29.834604 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-10 01:01:29.834608 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-10 01:01:29.834613 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.834618 | orchestrator |
2026-03-10 01:01:29.834623 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2026-03-10 01:01:29.834628 | orchestrator | Tuesday 10 March 2026 00:49:14 +0000 (0:00:01.095) 0:00:20.018 *********
2026-03-10 01:01:29.834635 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.834642 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.834647 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.834651 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.834654 | orchestrator |
2026-03-10 01:01:29.834657 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] ***********************
2026-03-10 01:01:29.834702 | orchestrator | Tuesday 10 March 2026 00:49:15 +0000 (0:00:00.706) 0:00:20.725 *********
2026-03-10 01:01:29.834716 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.834724 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.834729 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.834739 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.834743 | orchestrator |
2026-03-10 01:01:29.834746 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] ***************************
2026-03-10 01:01:29.834749 | orchestrator | Tuesday 10 March 2026 00:49:15 +0000 (0:00:00.367) 0:00:21.092 *********
2026-03-10 01:01:29.834758 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-10 00:49:11.193973', 'end': '2026-03-10 00:49:11.303974', 'delta': '0:00:00.110001', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.834763 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-10 00:49:12.586130', 'end': '2026-03-10 00:49:12.704619', 'delta': '0:00:00.118489', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.834769 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-10 00:49:13.183851', 'end': '2026-03-10 00:49:13.291666', 'delta': '0:00:00.107815', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.834772 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.834775 | orchestrator |
2026-03-10 01:01:29.834778 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2026-03-10 01:01:29.834781 | orchestrator | Tuesday 10 March 2026 00:49:15 +0000 (0:00:00.229) 0:00:21.322 *********
2026-03-10 01:01:29.834784 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.834788 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.834791 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.834794 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.834797 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.834800 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.834803 | orchestrator |
2026-03-10 01:01:29.834806 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2026-03-10 01:01:29.834809 | orchestrator | Tuesday 10 March 2026 00:49:17 +0000 (0:00:01.953) 0:00:23.276 *********
2026-03-10 01:01:29.834812 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-10 01:01:29.834816 | orchestrator |
2026-03-10 01:01:29.834819 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2026-03-10 01:01:29.834822 | orchestrator | Tuesday 10 March 2026 00:49:18 +0000 (0:00:01.043) 0:00:24.319 *********
2026-03-10 01:01:29.834825 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.834828 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.834832 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.834837 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.834840 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.834843 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.834846 | orchestrator |
2026-03-10 01:01:29.834850 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-10 01:01:29.834853 | orchestrator | Tuesday 10 March 2026 00:49:20 +0000 (0:00:01.263) 0:00:25.583 *********
2026-03-10 01:01:29.834856 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.834859 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.834862 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.834865 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.834868 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.834872 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.834875 | orchestrator |
2026-03-10 01:01:29.834878 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-10 01:01:29.834881 | orchestrator | Tuesday 10 March 2026 00:49:23 +0000 (0:00:03.111) 0:00:28.694 *********
2026-03-10 01:01:29.834884 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.834887 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.834890 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.834893 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.834896 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.834899 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.834902 | orchestrator |
2026-03-10 01:01:29.834906 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-10 01:01:29.834909 | orchestrator | Tuesday 10 March 2026 00:49:25 +0000 (0:00:02.272) 0:00:30.966 *********
2026-03-10 01:01:29.834912 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.834915 | orchestrator |
2026-03-10 01:01:29.834918 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-10 01:01:29.834939 | orchestrator | Tuesday 10 March 2026 00:49:26 +0000 (0:00:00.659) 0:00:31.626 *********
2026-03-10 01:01:29.834943 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.834947 | orchestrator |
2026-03-10 01:01:29.834950 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-10 01:01:29.834953 | orchestrator | Tuesday 10 March 2026 00:49:26 +0000 (0:00:00.570) 0:00:32.197 *********
2026-03-10 01:01:29.834956 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.834959 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.834962 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.834978 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.834982 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.834985 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.834988 | orchestrator |
2026-03-10 01:01:29.835026 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-10 01:01:29.835031 | orchestrator | Tuesday 10 March 2026 00:49:28 +0000 (0:00:01.559) 0:00:33.757 *********
2026-03-10 01:01:29.835034 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.835037 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.835040 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.835043 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.835047 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.835050 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.835053 | orchestrator |
2026-03-10 01:01:29.835056 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-10 01:01:29.835059 | orchestrator | Tuesday 10 March 2026 00:49:31 +0000 (0:00:03.419) 0:00:37.176 *********
2026-03-10 01:01:29.835064 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.835070 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.835073 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.835076 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.835079 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.835082 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.835088 | orchestrator |
2026-03-10 01:01:29.835091 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-10 01:01:29.835096 | orchestrator | Tuesday 10 March 2026 00:49:33 +0000 (0:00:01.708) 0:00:38.885 *********
2026-03-10 01:01:29.835099 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.835117 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.835121 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.835124 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.835127 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.835130 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.835133 | orchestrator |
2026-03-10 01:01:29.835136 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-10 01:01:29.835140 | orchestrator | Tuesday 10 March 2026 00:49:35 +0000 (0:00:02.279) 0:00:41.164 *********
2026-03-10 01:01:29.835143 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.835146 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.835149 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.835152 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.835156 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.835159 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.835162 | orchestrator |
2026-03-10 01:01:29.835165 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-10 01:01:29.835168 | orchestrator | Tuesday 10 March 2026 00:49:37 +0000 (0:00:01.580) 0:00:42.745 *********
2026-03-10 01:01:29.835171 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.835174 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.835177 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.835180 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.835183 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.835186 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.835189 | orchestrator |
2026-03-10 01:01:29.835205 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-10 01:01:29.835208 | orchestrator | Tuesday 10 March 2026 00:49:40 +0000 (0:00:02.859) 0:00:45.605 *********
2026-03-10 01:01:29.835211 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.835215 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.835220 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.835225 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.835230 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.835234 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.835241 | orchestrator |
2026-03-10 01:01:29.835247 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-10 01:01:29.835252 | orchestrator | Tuesday 10 March 2026 00:49:41 +0000 (0:00:00.931) 0:00:46.537 *********
2026-03-10 01:01:29.835258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c2da093f--67f0--5a54--a6a1--4e0ffcdb14df-osd--block--c2da093f--67f0--5a54--a6a1--4e0ffcdb14df', 'dm-uuid-LVM-fg8D7lPuLf2SnuohSesegra2TySSTgsXKLUHoRdmUx1vIjgJIQf595TyFYvkACQi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1e5abf04--63a5--5f41--bb2b--61caa92fdc91-osd--block--1e5abf04--63a5--5f41--bb2b--61caa92fdc91', 'dm-uuid-LVM-0LrrmFudB3mDRYcDT7ZzcT6hmoO3AZ7qv3BPRWdnHLmIEehPbOPsUUkqz5NluNBY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835283 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835291 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835296 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835301 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835306 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835312 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835317 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part1', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part14', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part15', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part16', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-10 01:01:29.835342 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c2da093f--67f0--5a54--a6a1--4e0ffcdb14df-osd--block--c2da093f--67f0--5a54--a6a1--4e0ffcdb14df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tGGJVI-Kjsh-UAJd-ru60-SoR8-9teX-TdvgcC', 'scsi-0QEMU_QEMU_HARDDISK_8f76f090-a1e0-42c3-8072-1f51d4df9a8c', 'scsi-SQEMU_QEMU_HARDDISK_8f76f090-a1e0-42c3-8072-1f51d4df9a8c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-10 01:01:29.835348 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c7cdfd74--cae8--56d1--a0f9--4438e0fe684e-osd--block--c7cdfd74--cae8--56d1--a0f9--4438e0fe684e', 'dm-uuid-LVM-crZZNUYAkiNGTnZUimsr43acDHrET7dTYRUkxsOmreHd8425IdJBYjuVWBlXVoKJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1e5abf04--63a5--5f41--bb2b--61caa92fdc91-osd--block--1e5abf04--63a5--5f41--bb2b--61caa92fdc91'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yua8fM-XwQN-51jl-eNOV-Qqrh-Xeao-CP3M9d', 'scsi-0QEMU_QEMU_HARDDISK_e4712c11-e6a0-4829-954c-3e21e73d266a', 'scsi-SQEMU_QEMU_HARDDISK_e4712c11-e6a0-4829-954c-3e21e73d266a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-10 01:01:29.835359 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5a55caf6--84ae--542a--a466--02d3e6c6095e-osd--block--5a55caf6--84ae--542a--a466--02d3e6c6095e', 'dm-uuid-LVM-ww10THr3vAWs6YC2YLliCJBNkkdUNVlsx91VZ2PSKLbQmEw8FVxjqCv8vfg6Vd3v'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835371 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b638158-044f-4e2c-a80d-2256f7b00733', 'scsi-SQEMU_QEMU_HARDDISK_5b638158-044f-4e2c-a80d-2256f7b00733'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-10 01:01:29.835381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-10 01:01:29.835386 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835392 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835403 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835408 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.835414 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835424 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835430 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835446 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--276dc5cf--0fff--57f4--b280--c3cda8556bee-osd--block--276dc5cf--0fff--57f4--b280--c3cda8556bee', 'dm-uuid-LVM-yi1gXmNOndbMseZbmXZIlMtCjradzf0QOPnrVXCTWCMoVR6dlw68AbG7U9XJCe9Q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1c4f45a1--f837--5281--b6b5--75662d68eedd-osd--block--1c4f45a1--f837--5281--b6b5--75662d68eedd', 'dm-uuid-LVM-JYeXdpyT69xd4mJwK8fftq9TFlsAtIjzxPozNSH0AeW9ePThwtiJHfCbXkcYanKl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part1', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part14', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part15', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part16', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-10 01:01:29.835483 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2026-03-10 01:01:29.835930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c7cdfd74--cae8--56d1--a0f9--4438e0fe684e-osd--block--c7cdfd74--cae8--56d1--a0f9--4438e0fe684e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sWpfpg-U2qw-FqGq-Mxi9-RNNI-Wgzt-S0TXXF', 'scsi-0QEMU_QEMU_HARDDISK_b94fdc5f-2b9b-46a8-a60f-74e41f269a0d', 'scsi-SQEMU_QEMU_HARDDISK_b94fdc5f-2b9b-46a8-a60f-74e41f269a0d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2026-03-10 01:01:29.835954 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5a55caf6--84ae--542a--a466--02d3e6c6095e-osd--block--5a55caf6--84ae--542a--a466--02d3e6c6095e'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jWTw2W-lrwH-PHTk-lRyE-lPFy-XJdm-7p63ov', 'scsi-0QEMU_QEMU_HARDDISK_32f512e5-1c04-4680-91d7-4268581c2350', 'scsi-SQEMU_QEMU_HARDDISK_32f512e5-1c04-4680-91d7-4268581c2350'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:01:29.835960 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.835966 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21ab9d1e-083b-4748-865b-4e7341aec385', 'scsi-SQEMU_QEMU_HARDDISK_21ab9d1e-083b-4748-865b-4e7341aec385'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:01:29.835972 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:01:29.835983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.835988 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-10 01:01:29.835998 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836004 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836012 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836017 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836024 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part1', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part14', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part15', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part16', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:01:29.836037 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--276dc5cf--0fff--57f4--b280--c3cda8556bee-osd--block--276dc5cf--0fff--57f4--b280--c3cda8556bee'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-P8ZAa1-SZlc-hZ12-Pgh0-jOFD-cm7l-qlcnR0', 'scsi-0QEMU_QEMU_HARDDISK_525599b5-6362-4aac-a0b3-94bd4cb39972', 'scsi-SQEMU_QEMU_HARDDISK_525599b5-6362-4aac-a0b3-94bd4cb39972'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:01:29.836045 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1c4f45a1--f837--5281--b6b5--75662d68eedd-osd--block--1c4f45a1--f837--5281--b6b5--75662d68eedd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Buht1o-r91x-2hlE-A6fu-XTGr-iGdr-0E5mC7', 'scsi-0QEMU_QEMU_HARDDISK_885a647d-e739-4ea9-ae01-9c2ce04d6822', 'scsi-SQEMU_QEMU_HARDDISK_885a647d-e739-4ea9-ae01-9c2ce04d6822'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:01:29.836051 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e39970d-8644-42a9-a13b-932f32b0237f', 'scsi-SQEMU_QEMU_HARDDISK_3e39970d-8644-42a9-a13b-932f32b0237f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:01:29.836056 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-03-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:01:29.836062 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.836068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836101 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836120 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 
'virtual': 1}})  2026-03-10 01:01:29.836167 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4', 'scsi-SQEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4-part1', 'scsi-SQEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4-part14', 'scsi-SQEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4-part15', 'scsi-SQEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4-part16', 'scsi-SQEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:01:29.836177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:01:29.836182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836193 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b', 'scsi-SQEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b-part1', 'scsi-SQEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b-part14', 'scsi-SQEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b-part15', 'scsi-SQEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b-part16', 'scsi-SQEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:01:29.836199 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:01:29.836207 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.836212 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.836218 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.836223 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836255 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  
2026-03-10 01:01:29.836268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:01:29.836277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954', 'scsi-SQEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954-part1', 'scsi-SQEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954-part14', 'scsi-SQEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954-part15', 'scsi-SQEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954-part16', 'scsi-SQEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:01:29.836286 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-03-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:01:29.836291 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.836297 | orchestrator | 2026-03-10 01:01:29.836303 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-10 01:01:29.836308 | orchestrator | Tuesday 10 March 2026 00:49:44 +0000 (0:00:03.037) 0:00:49.574 ********* 2026-03-10 01:01:29.836340 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c2da093f--67f0--5a54--a6a1--4e0ffcdb14df-osd--block--c2da093f--67f0--5a54--a6a1--4e0ffcdb14df', 
'dm-uuid-LVM-fg8D7lPuLf2SnuohSesegra2TySSTgsXKLUHoRdmUx1vIjgJIQf595TyFYvkACQi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836347 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1e5abf04--63a5--5f41--bb2b--61caa92fdc91-osd--block--1e5abf04--63a5--5f41--bb2b--61caa92fdc91', 'dm-uuid-LVM-0LrrmFudB3mDRYcDT7ZzcT6hmoO3AZ7qv3BPRWdnHLmIEehPbOPsUUkqz5NluNBY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836358 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836380 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': 
True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836387 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836408 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836414 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836422 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836432 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836437 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 
'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c7cdfd74--cae8--56d1--a0f9--4438e0fe684e-osd--block--c7cdfd74--cae8--56d1--a0f9--4438e0fe684e', 'dm-uuid-LVM-crZZNUYAkiNGTnZUimsr43acDHrET7dTYRUkxsOmreHd8425IdJBYjuVWBlXVoKJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836443 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836467 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5a55caf6--84ae--542a--a466--02d3e6c6095e-osd--block--5a55caf6--84ae--542a--a466--02d3e6c6095e', 'dm-uuid-LVM-ww10THr3vAWs6YC2YLliCJBNkkdUNVlsx91VZ2PSKLbQmEw8FVxjqCv8vfg6Vd3v'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836476 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part1', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part14', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part15', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part16', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 
1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836487 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836492 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836501 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': 
['ceph--c2da093f--67f0--5a54--a6a1--4e0ffcdb14df-osd--block--c2da093f--67f0--5a54--a6a1--4e0ffcdb14df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tGGJVI-Kjsh-UAJd-ru60-SoR8-9teX-TdvgcC', 'scsi-0QEMU_QEMU_HARDDISK_8f76f090-a1e0-42c3-8072-1f51d4df9a8c', 'scsi-SQEMU_QEMU_HARDDISK_8f76f090-a1e0-42c3-8072-1f51d4df9a8c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836509 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836518 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1e5abf04--63a5--5f41--bb2b--61caa92fdc91-osd--block--1e5abf04--63a5--5f41--bb2b--61caa92fdc91'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yua8fM-XwQN-51jl-eNOV-Qqrh-Xeao-CP3M9d', 'scsi-0QEMU_QEMU_HARDDISK_e4712c11-e6a0-4829-954c-3e21e73d266a', 'scsi-SQEMU_QEMU_HARDDISK_e4712c11-e6a0-4829-954c-3e21e73d266a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836524 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836529 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836535 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836544 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836553 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836562 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836568 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836574 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836580 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) 
| bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b638158-044f-4e2c-a80d-2256f7b00733', 'scsi-SQEMU_QEMU_HARDDISK_5b638158-044f-4e2c-a80d-2256f7b00733'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836590 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836598 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836609 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b', 'scsi-SQEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b-part1', 'scsi-SQEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b-part14', 'scsi-SQEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b-part15', 'scsi-SQEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b-part16', 'scsi-SQEMU_QEMU_HARDDISK_ec09f1e5-cfe5-4632-85b1-1bf0bb88dd0b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836617 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-03-31-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836623 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.836632 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836716 | orchestrator | skipping: [testbed-node-3] 
2026-03-10 01:01:29.836728 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836735 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836741 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:01:29.836748 | orchestrator | skipping: 
[testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.836754 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.836764 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--276dc5cf--0fff--57f4--b280--c3cda8556bee-osd--block--276dc5cf--0fff--57f4--b280--c3cda8556bee', 'dm-uuid-LVM-yi1gXmNOndbMseZbmXZIlMtCjradzf0QOPnrVXCTWCMoVR6dlw68AbG7U9XJCe9Q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.836779 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.836786 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.836791 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.836798 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.836811 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4', 'scsi-SQEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4-part1', 'scsi-SQEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4-part14', 'scsi-SQEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4-part15', 'scsi-SQEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4-part16', 'scsi-SQEMU_QEMU_HARDDISK_b6c376d3-f3e9-4f38-b320-793235a9f6c4-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.836822 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-03-24-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.836828 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.836834 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.836841 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1c4f45a1--f837--5281--b6b5--75662d68eedd-osd--block--1c4f45a1--f837--5281--b6b5--75662d68eedd', 'dm-uuid-LVM-JYeXdpyT69xd4mJwK8fftq9TFlsAtIjzxPozNSH0AeW9ePThwtiJHfCbXkcYanKl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.836846 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.836983 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.836995 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.836999 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837002 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837006 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837010 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837016 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837053 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954', 'scsi-SQEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954-part1', 'scsi-SQEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954-part14', 'scsi-SQEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954-part15', 'scsi-SQEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954-part16', 'scsi-SQEMU_QEMU_HARDDISK_3d5f2fcb-5fbf-4e93-acf4-14417225e954-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837058 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-03-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837062 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.837065 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837073 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837080 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837084 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837088 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837095 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part1', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part14', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part15', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part16', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837104 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837108 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c7cdfd74--cae8--56d1--a0f9--4438e0fe684e-osd--block--c7cdfd74--cae8--56d1--a0f9--4438e0fe684e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sWpfpg-U2qw-FqGq-Mxi9-RNNI-Wgzt-S0TXXF', 'scsi-0QEMU_QEMU_HARDDISK_b94fdc5f-2b9b-46a8-a60f-74e41f269a0d', 'scsi-SQEMU_QEMU_HARDDISK_b94fdc5f-2b9b-46a8-a60f-74e41f269a0d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837112 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837116 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5a55caf6--84ae--542a--a466--02d3e6c6095e-osd--block--5a55caf6--84ae--542a--a466--02d3e6c6095e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jWTw2W-lrwH-PHTk-lRyE-lPFy-XJdm-7p63ov', 'scsi-0QEMU_QEMU_HARDDISK_32f512e5-1c04-4680-91d7-4268581c2350', 'scsi-SQEMU_QEMU_HARDDISK_32f512e5-1c04-4680-91d7-4268581c2350'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837124 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837139 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21ab9d1e-083b-4748-865b-4e7341aec385', 'scsi-SQEMU_QEMU_HARDDISK_21ab9d1e-083b-4748-865b-4e7341aec385'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837145 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837148 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837152 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.837159 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part1', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part14', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part15', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part16', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837168 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--276dc5cf--0fff--57f4--b280--c3cda8556bee-osd--block--276dc5cf--0fff--57f4--b280--c3cda8556bee'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-P8ZAa1-SZlc-hZ12-Pgh0-jOFD-cm7l-qlcnR0', 'scsi-0QEMU_QEMU_HARDDISK_525599b5-6362-4aac-a0b3-94bd4cb39972', 'scsi-SQEMU_QEMU_HARDDISK_525599b5-6362-4aac-a0b3-94bd4cb39972'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837175 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1c4f45a1--f837--5281--b6b5--75662d68eedd-osd--block--1c4f45a1--f837--5281--b6b5--75662d68eedd'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Buht1o-r91x-2hlE-A6fu-XTGr-iGdr-0E5mC7', 'scsi-0QEMU_QEMU_HARDDISK_885a647d-e739-4ea9-ae01-9c2ce04d6822', 'scsi-SQEMU_QEMU_HARDDISK_885a647d-e739-4ea9-ae01-9c2ce04d6822'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837179 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e39970d-8644-42a9-a13b-932f32b0237f', 'scsi-SQEMU_QEMU_HARDDISK_3e39970d-8644-42a9-a13b-932f32b0237f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837190 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-03-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-10 01:01:29.837194 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.837198 | orchestrator |
2026-03-10 01:01:29.837203 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-10 01:01:29.837207 | orchestrator | Tuesday 10 March 2026 00:49:46 +0000 (0:00:02.484) 0:00:52.059 *********
2026-03-10 01:01:29.837211 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.837214 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.837218 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.837222 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.837225 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.837229 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.837232 | orchestrator |
2026-03-10 01:01:29.837236 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-10 01:01:29.837239 | orchestrator | Tuesday 10 March 2026 00:49:48 +0000 (0:00:01.873) 0:00:53.932 *********
2026-03-10 01:01:29.837243 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.837246 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.837251 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.837256 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.837262 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.837269 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.837276 | orchestrator |
2026-03-10 01:01:29.837282 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-10 01:01:29.837287 | orchestrator | Tuesday 10 March 2026 00:49:49 +0000 (0:00:00.761) 0:00:54.694 *********
2026-03-10 01:01:29.837292 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.837321 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.837327 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.837336 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.837341 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.837346 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.837351 | orchestrator |
2026-03-10 01:01:29.837355 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-10 01:01:29.837361 | orchestrator | Tuesday 10 March 2026 00:49:50 +0000 (0:00:00.999) 0:00:55.693 *********
2026-03-10 01:01:29.837366 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.837372 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.837377 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.837382 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.837388 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.837394 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.837400 | orchestrator |
2026-03-10 01:01:29.837407 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-10 01:01:29.837412 | orchestrator | Tuesday 10 March 2026 00:49:51 +0000 (0:00:01.224) 0:00:56.605 *********
2026-03-10 01:01:29.837418 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.837424 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.837513 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.837521 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.837527 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.837532 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.837543 | orchestrator |
2026-03-10 01:01:29.837549 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-10 01:01:29.837554 | orchestrator | Tuesday 10 March 2026 00:49:52 +0000 (0:00:01.180) 0:00:57.830 *********
2026-03-10 01:01:29.837560 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.837565 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.837570 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.837575 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.837580 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.837585 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.837590 | orchestrator |
2026-03-10 01:01:29.837595 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-10 01:01:29.837601 | orchestrator | Tuesday 10 March 2026 00:49:53 +0000 (0:00:01.180) 0:00:59.011 *********
2026-03-10 01:01:29.837606 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-10 01:01:29.837612 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-10 01:01:29.837618 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-10 01:01:29.837623 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-10 01:01:29.837629 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-10 01:01:29.837634 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2026-03-10 01:01:29.837640 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-10 01:01:29.837646 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-10 01:01:29.837652 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2026-03-10 01:01:29.837658 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-10 01:01:29.837663 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2026-03-10 01:01:29.837668 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-10 01:01:29.837674 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2026-03-10 01:01:29.837680 | orchestrator | ok:
[testbed-node-2] => (item=testbed-node-1) 2026-03-10 01:01:29.837687 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-10 01:01:29.837693 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-10 01:01:29.837699 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-10 01:01:29.837704 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-10 01:01:29.837710 | orchestrator | 2026-03-10 01:01:29.837718 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-10 01:01:29.837753 | orchestrator | Tuesday 10 March 2026 00:49:58 +0000 (0:00:04.927) 0:01:03.938 ********* 2026-03-10 01:01:29.837760 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-10 01:01:29.837766 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-10 01:01:29.837772 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-10 01:01:29.837778 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-10 01:01:29.837784 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-10 01:01:29.837804 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-10 01:01:29.837811 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-10 01:01:29.837817 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-10 01:01:29.837829 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-10 01:01:29.837835 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-10 01:01:29.837841 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-10 01:01:29.837847 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-10 01:01:29.837853 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-10 01:01:29.837859 | orchestrator | 
skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-10 01:01:29.837864 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-10 01:01:29.837874 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.837879 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.837884 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.837889 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.837895 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.837900 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-10 01:01:29.837905 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-10 01:01:29.837928 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-10 01:01:29.837934 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.837939 | orchestrator |
2026-03-10 01:01:29.837945 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-10 01:01:29.837955 | orchestrator | Tuesday 10 March 2026 00:49:59 +0000 (0:00:01.152) 0:01:05.091 *********
2026-03-10 01:01:29.837960 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.837966 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.837972 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.837978 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 01:01:29.837983 | orchestrator |
2026-03-10 01:01:29.837988 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-10 01:01:29.837995 | orchestrator | Tuesday 10 March 2026 00:50:00 +0000 (0:00:01.254) 0:01:06.345 *********
2026-03-10 01:01:29.838000 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.838006 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.838012 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.838351 | orchestrator |
2026-03-10 01:01:29.838360 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-10 01:01:29.838365 | orchestrator | Tuesday 10 March 2026 00:50:01 +0000 (0:00:00.373) 0:01:06.718 *********
2026-03-10 01:01:29.838371 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.838376 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.838381 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.838387 | orchestrator |
2026-03-10 01:01:29.838393 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-10 01:01:29.838398 | orchestrator | Tuesday 10 March 2026 00:50:01 +0000 (0:00:00.491) 0:01:07.210 *********
2026-03-10 01:01:29.838404 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.838409 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.838415 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.838420 | orchestrator |
2026-03-10 01:01:29.838426 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-10 01:01:29.838431 | orchestrator | Tuesday 10 March 2026 00:50:02 +0000 (0:00:00.991) 0:01:08.201 *********
2026-03-10 01:01:29.838436 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.838442 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.838457 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.838463 | orchestrator |
2026-03-10 01:01:29.838468 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-10 01:01:29.838473 | orchestrator | Tuesday 10 March 2026 00:50:03 +0000 (0:00:00.780) 0:01:08.982 *********
2026-03-10 01:01:29.838478 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 01:01:29.838483 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 01:01:29.838488 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 01:01:29.838494 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.838499 | orchestrator |
2026-03-10 01:01:29.838504 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-10 01:01:29.838509 | orchestrator | Tuesday 10 March 2026 00:50:04 +0000 (0:00:00.504) 0:01:09.487 *********
2026-03-10 01:01:29.838514 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 01:01:29.838525 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 01:01:29.838530 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 01:01:29.838535 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.838540 | orchestrator |
2026-03-10 01:01:29.838545 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-10 01:01:29.838550 | orchestrator | Tuesday 10 March 2026 00:50:04 +0000 (0:00:00.961) 0:01:10.448 *********
2026-03-10 01:01:29.838556 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 01:01:29.838561 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 01:01:29.838566 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 01:01:29.838571 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.838576 | orchestrator |
2026-03-10 01:01:29.838581 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-10 01:01:29.838587 | orchestrator | Tuesday 10 March 2026 00:50:05 +0000 (0:00:00.711) 0:01:11.160 *********
2026-03-10 01:01:29.838592 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.838597 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.838602 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.838606 | orchestrator |
2026-03-10 01:01:29.838611 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-10 01:01:29.838616 | orchestrator | Tuesday 10 March 2026 00:50:06 +0000 (0:00:00.986) 0:01:12.146 *********
2026-03-10 01:01:29.838621 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-10 01:01:29.838626 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-10 01:01:29.838641 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-10 01:01:29.838647 | orchestrator |
2026-03-10 01:01:29.838653 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-10 01:01:29.838659 | orchestrator | Tuesday 10 March 2026 00:50:08 +0000 (0:00:01.375) 0:01:13.522 *********
2026-03-10 01:01:29.838664 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-10 01:01:29.838669 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-10 01:01:29.838674 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-10 01:01:29.838680 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 01:01:29.838685 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-10 01:01:29.838690 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-10 01:01:29.838696 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-10 01:01:29.838702 | orchestrator |
2026-03-10 01:01:29.838708 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-10 01:01:29.838718 | orchestrator | Tuesday 10 March 2026 00:50:09 +0000 (0:00:00.982) 0:01:14.504 *********
2026-03-10 01:01:29.838722 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-10 01:01:29.838727 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-10 01:01:29.838732 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-10 01:01:29.838738 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 01:01:29.838743 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-10 01:01:29.838749 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-10 01:01:29.838754 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-10 01:01:29.838759 | orchestrator |
2026-03-10 01:01:29.838764 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-10 01:01:29.838769 | orchestrator | Tuesday 10 March 2026 00:50:11 +0000 (0:00:02.379) 0:01:16.884 *********
2026-03-10 01:01:29.838781 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:01:29.838787 | orchestrator |
2026-03-10 01:01:29.838792 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-10 01:01:29.838797 | orchestrator | Tuesday 10 March 2026 00:50:12 +0000 (0:00:01.209) 0:01:18.094 *********
2026-03-10 01:01:29.838802 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:01:29.838806 | orchestrator |
2026-03-10 01:01:29.838811 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-10 01:01:29.838816 | orchestrator | Tuesday 10 March 2026
00:50:13 +0000 (0:00:01.197) 0:01:19.292 *********
2026-03-10 01:01:29.838821 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.838826 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.838831 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.838836 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.838841 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.838846 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.838852 | orchestrator |
2026-03-10 01:01:29.838858 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-10 01:01:29.838864 | orchestrator | Tuesday 10 March 2026 00:50:15 +0000 (0:00:01.529) 0:01:20.822 *********
2026-03-10 01:01:29.838869 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.838875 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.838880 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.838885 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.838890 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.838894 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.838900 | orchestrator |
2026-03-10 01:01:29.838905 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-10 01:01:29.838911 | orchestrator | Tuesday 10 March 2026 00:50:16 +0000 (0:00:01.018) 0:01:21.840 *********
2026-03-10 01:01:29.838916 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.838922 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.838927 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.838932 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.838937 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.838942 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.838948 | orchestrator |
2026-03-10 01:01:29.838953 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-10 01:01:29.838958 | orchestrator | Tuesday 10 March 2026 00:50:17 +0000 (0:00:00.971) 0:01:22.811 *********
2026-03-10 01:01:29.838963 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.838968 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.838974 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.838979 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.838984 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.838990 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.838995 | orchestrator |
2026-03-10 01:01:29.839000 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-10 01:01:29.839005 | orchestrator | Tuesday 10 March 2026 00:50:18 +0000 (0:00:00.759) 0:01:23.571 *********
2026-03-10 01:01:29.839010 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.839016 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.839021 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.839026 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.839031 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.839043 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.839049 | orchestrator |
2026-03-10 01:01:29.839054 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-10 01:01:29.839060 | orchestrator | Tuesday 10 March 2026 00:50:19 +0000 (0:00:01.642) 0:01:25.214 *********
2026-03-10 01:01:29.839071 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.839076 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.839081 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.839086 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.839091 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.839096 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.839102 | orchestrator |
2026-03-10 01:01:29.839107 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-10 01:01:29.839112 | orchestrator | Tuesday 10 March 2026 00:50:20 +0000 (0:00:00.857) 0:01:26.071 *********
2026-03-10 01:01:29.839118 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.839123 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.839128 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.839133 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.839138 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.839144 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.839149 | orchestrator |
2026-03-10 01:01:29.839154 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-10 01:01:29.839163 | orchestrator | Tuesday 10 March 2026 00:50:21 +0000 (0:00:00.771) 0:01:26.843 *********
2026-03-10 01:01:29.839169 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.839175 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.839180 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.839185 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.839190 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.839195 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.839201 | orchestrator |
2026-03-10 01:01:29.839206 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-10 01:01:29.839212 | orchestrator | Tuesday 10 March 2026 00:50:22 +0000 (0:00:01.230) 0:01:28.074 *********
2026-03-10 01:01:29.839218 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.839222 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.839228 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.839233 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.839238 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.839243 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.839246 | orchestrator |
2026-03-10 01:01:29.839249 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-10 01:01:29.839252 | orchestrator | Tuesday 10 March 2026 00:50:24 +0000 (0:00:02.139) 0:01:30.213 *********
2026-03-10 01:01:29.839255 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.839258 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.839261 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.839264 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.839268 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.839271 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.839274 | orchestrator |
2026-03-10 01:01:29.839277 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-10 01:01:29.839280 | orchestrator | Tuesday 10 March 2026 00:50:26 +0000 (0:00:01.549) 0:01:31.762 *********
2026-03-10 01:01:29.839283 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.839286 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.839289 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.839292 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.839295 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.839298 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.839302 | orchestrator |
2026-03-10 01:01:29.839305 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-10 01:01:29.839308 | orchestrator | Tuesday 10 March 2026 00:50:27 +0000 (0:00:01.227) 0:01:32.990 *********
2026-03-10 01:01:29.839311 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.839314 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.839317 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.839320 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.839323 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.839330 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.839333 | orchestrator |
2026-03-10 01:01:29.839336 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-10 01:01:29.839340 | orchestrator | Tuesday 10 March 2026 00:50:28 +0000 (0:00:00.833) 0:01:33.823 *********
2026-03-10 01:01:29.839343 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.839346 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.839349 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.839352 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.839355 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.839358 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.839361 | orchestrator |
2026-03-10 01:01:29.839364 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-10 01:01:29.839368 | orchestrator | Tuesday 10 March 2026 00:50:29 +0000 (0:00:00.992) 0:01:34.815 *********
2026-03-10 01:01:29.839371 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.839374 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.839377 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.839380 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.839383 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.839386 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.839389 | orchestrator |
2026-03-10 01:01:29.839392 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-10 01:01:29.839396 | orchestrator | Tuesday 10 March 2026 00:50:30 +0000 (0:00:00.935) 0:01:35.751 *********
2026-03-10 01:01:29.839399 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.839402 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.839405 | orchestrator
| skipping: [testbed-node-5]
2026-03-10 01:01:29.839408 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.839411 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.839414 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.839417 | orchestrator |
2026-03-10 01:01:29.839420 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-10 01:01:29.839423 | orchestrator | Tuesday 10 March 2026 00:50:32 +0000 (0:00:02.033) 0:01:37.784 *********
2026-03-10 01:01:29.839426 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.839429 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.839432 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.839435 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.839442 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.839445 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.839463 | orchestrator |
2026-03-10 01:01:29.839469 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-10 01:01:29.839474 | orchestrator | Tuesday 10 March 2026 00:50:33 +0000 (0:00:01.046) 0:01:38.831 *********
2026-03-10 01:01:29.839479 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.839482 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.839485 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.839488 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.839491 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.839494 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.839497 | orchestrator |
2026-03-10 01:01:29.839501 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-10 01:01:29.839504 | orchestrator | Tuesday 10 March 2026 00:50:35 +0000 (0:00:01.785) 0:01:40.617 *********
2026-03-10 01:01:29.839507 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.839510 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.839513 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.839516 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.839519 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.839522 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.839525 | orchestrator |
2026-03-10 01:01:29.839528 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-10 01:01:29.839534 | orchestrator | Tuesday 10 March 2026 00:50:36 +0000 (0:00:01.131) 0:01:41.749 *********
2026-03-10 01:01:29.839539 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.839543 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.839546 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.839549 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.839552 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.839555 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.839558 | orchestrator |
2026-03-10 01:01:29.839561 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-10 01:01:29.839564 | orchestrator | Tuesday 10 March 2026 00:50:38 +0000 (0:00:01.797) 0:01:43.546 *********
2026-03-10 01:01:29.839567 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:01:29.839570 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:01:29.839573 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:01:29.839577 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:01:29.839582 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:01:29.839587 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:01:29.839591 | orchestrator |
2026-03-10 01:01:29.839596 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-10 01:01:29.839602 | orchestrator | Tuesday 10 March 2026 00:50:40 +0000 (0:00:02.561) 0:01:46.108 *********
2026-03-10 01:01:29.839607 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:01:29.839611 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:01:29.839616 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:01:29.839620 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:01:29.839625 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:01:29.839630 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:01:29.839635 | orchestrator |
2026-03-10 01:01:29.839640 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-10 01:01:29.839645 | orchestrator | Tuesday 10 March 2026 00:50:45 +0000 (0:00:04.410) 0:01:50.518 *********
2026-03-10 01:01:29.839651 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:01:29.839656 | orchestrator |
2026-03-10 01:01:29.839661 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-10 01:01:29.839666 | orchestrator | Tuesday 10 March 2026 00:50:47 +0000 (0:00:02.275) 0:01:52.793 *********
2026-03-10 01:01:29.839671 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.839676 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.839681 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.839687 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.839692 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.839697 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.839703 | orchestrator |
2026-03-10 01:01:29.839709 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-10 01:01:29.839733 | orchestrator | Tuesday 10 March 2026 00:50:48 +0000 (0:00:01.199) 0:01:53.993 *********
2026-03-10 01:01:29.839737 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.839741 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.839744 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.839747 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.839750 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.839753 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.839756 | orchestrator |
2026-03-10 01:01:29.839759 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-10 01:01:29.839762 | orchestrator | Tuesday 10 March 2026 00:50:51 +0000 (0:00:02.907) 0:01:56.900 *********
2026-03-10 01:01:29.839765 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-10 01:01:29.839768 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-10 01:01:29.839771 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-10 01:01:29.839779 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-10 01:01:29.839782 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-10 01:01:29.839785 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-10 01:01:29.839789 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-10 01:01:29.839792 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-10 01:01:29.839795 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-10 01:01:29.839798 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-10 01:01:29.839805 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-10 01:01:29.839808 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-10 01:01:29.839811 | orchestrator |
2026-03-10 01:01:29.839814 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-10 01:01:29.839817 | orchestrator | Tuesday 10 March 2026 00:50:54 +0000 (0:00:02.649) 0:01:59.550 *********
2026-03-10 01:01:29.839820 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:01:29.839823 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:01:29.839827 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:01:29.839830 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:01:29.839833 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:01:29.839836 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:01:29.839839 | orchestrator |
2026-03-10 01:01:29.839842 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-10 01:01:29.839845 | orchestrator | Tuesday 10 March 2026 00:50:55 +0000 (0:00:01.275) 0:02:00.825 *********
2026-03-10 01:01:29.839848 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.839851 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.839854 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.839857 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.839860 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.839863 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.839866 | orchestrator |
2026-03-10 01:01:29.839873 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-10 01:01:29.839876 | orchestrator | Tuesday 10 March 2026 00:50:56 +0000 (0:00:00.706) 0:02:01.532 *********
2026-03-10 01:01:29.839879 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.839882 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.839885 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.839888 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.839891 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.839894 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.839897 | orchestrator | 2026-03-10 01:01:29.839900 | orchestrator | TASK [ceph-container-common : Include registry.yml] **************************** 2026-03-10 01:01:29.839903 | orchestrator | Tuesday 10 March 2026 00:50:56 +0000 (0:00:00.848) 0:02:02.381 ********* 2026-03-10 01:01:29.839906 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.839909 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.839912 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.839915 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.839918 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.839922 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.839925 | orchestrator | 2026-03-10 01:01:29.839928 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] ************************* 2026-03-10 01:01:29.839931 | orchestrator | Tuesday 10 March 2026 00:50:57 +0000 (0:00:00.558) 0:02:02.939 ********* 2026-03-10 01:01:29.839934 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:01:29.839942 | orchestrator | 2026-03-10 01:01:29.839945 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ******************** 2026-03-10 01:01:29.839948 | orchestrator | Tuesday 10 March 2026 00:50:58 +0000 (0:00:01.177) 0:02:04.117 ********* 2026-03-10 01:01:29.839951 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.839954 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.839957 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.839960 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.839963 | 
orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.839966 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.839969 | orchestrator | 2026-03-10 01:01:29.839973 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] *** 2026-03-10 01:01:29.839976 | orchestrator | Tuesday 10 March 2026 00:51:45 +0000 (0:00:47.290) 0:02:51.407 ********* 2026-03-10 01:01:29.839979 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-10 01:01:29.839982 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-10 01:01:29.839985 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-10 01:01:29.839988 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.839991 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-10 01:01:29.839994 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-10 01:01:29.839997 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-10 01:01:29.840000 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.840004 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-10 01:01:29.840007 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-10 01:01:29.840010 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-10 01:01:29.840013 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.840016 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-10 01:01:29.840019 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-10 01:01:29.840022 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)  
2026-03-10 01:01:29.840025 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.840028 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-10 01:01:29.840031 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-10 01:01:29.840035 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-10 01:01:29.840038 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.840043 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)  2026-03-10 01:01:29.840046 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)  2026-03-10 01:01:29.840049 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)  2026-03-10 01:01:29.840052 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.840056 | orchestrator | 2026-03-10 01:01:29.840059 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] *********** 2026-03-10 01:01:29.840062 | orchestrator | Tuesday 10 March 2026 00:51:46 +0000 (0:00:01.010) 0:02:52.418 ********* 2026-03-10 01:01:29.840065 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.840068 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.840071 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.840074 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.840077 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.840080 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.840083 | orchestrator | 2026-03-10 01:01:29.840086 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] ********************* 2026-03-10 01:01:29.840092 | orchestrator | Tuesday 10 March 2026 00:51:48 +0000 (0:00:01.230) 0:02:53.648 ********* 2026-03-10 01:01:29.840095 | orchestrator | skipping: [testbed-node-3] 2026-03-10 
01:01:29.840098 | orchestrator | 2026-03-10 01:01:29.840101 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************ 2026-03-10 01:01:29.840105 | orchestrator | Tuesday 10 March 2026 00:51:48 +0000 (0:00:00.196) 0:02:53.844 ********* 2026-03-10 01:01:29.840108 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.840111 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.840114 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.840117 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.840120 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.840123 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.840126 | orchestrator | 2026-03-10 01:01:29.840129 | orchestrator | TASK [ceph-container-common : Load ceph dev image] ***************************** 2026-03-10 01:01:29.840132 | orchestrator | Tuesday 10 March 2026 00:51:49 +0000 (0:00:00.998) 0:02:54.843 ********* 2026-03-10 01:01:29.840135 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.840138 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.840141 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.840144 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.840147 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.840150 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.840154 | orchestrator | 2026-03-10 01:01:29.840160 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ****************** 2026-03-10 01:01:29.840163 | orchestrator | Tuesday 10 March 2026 00:51:50 +0000 (0:00:01.238) 0:02:56.082 ********* 2026-03-10 01:01:29.840166 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.840169 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.840172 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.840175 | orchestrator | skipping: [testbed-node-0] 2026-03-10 
01:01:29.840178 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.840181 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.840184 | orchestrator | 2026-03-10 01:01:29.840187 | orchestrator | TASK [ceph-container-common : Get ceph version] ******************************** 2026-03-10 01:01:29.840190 | orchestrator | Tuesday 10 March 2026 00:51:51 +0000 (0:00:00.935) 0:02:57.017 ********* 2026-03-10 01:01:29.840193 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.840196 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.840199 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.840202 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.840205 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.840208 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.840211 | orchestrator | 2026-03-10 01:01:29.840215 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] *** 2026-03-10 01:01:29.840218 | orchestrator | Tuesday 10 March 2026 00:51:54 +0000 (0:00:03.206) 0:03:00.224 ********* 2026-03-10 01:01:29.840221 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.840224 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.840227 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.840230 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.840233 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.840236 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.840239 | orchestrator | 2026-03-10 01:01:29.840242 | orchestrator | TASK [ceph-container-common : Include release.yml] ***************************** 2026-03-10 01:01:29.840245 | orchestrator | Tuesday 10 March 2026 00:51:55 +0000 (0:00:00.915) 0:03:01.139 ********* 2026-03-10 01:01:29.840248 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 
01:01:29.840252 | orchestrator | 2026-03-10 01:01:29.840255 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] ********************* 2026-03-10 01:01:29.840271 | orchestrator | Tuesday 10 March 2026 00:51:57 +0000 (0:00:01.899) 0:03:03.038 ********* 2026-03-10 01:01:29.840277 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.840286 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.840292 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.840323 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.840327 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.840330 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.840333 | orchestrator | 2026-03-10 01:01:29.840336 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ******************** 2026-03-10 01:01:29.840339 | orchestrator | Tuesday 10 March 2026 00:51:58 +0000 (0:00:01.340) 0:03:04.378 ********* 2026-03-10 01:01:29.840342 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.840345 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.840348 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.840351 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.840354 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.840357 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.840360 | orchestrator | 2026-03-10 01:01:29.840364 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2026-03-10 01:01:29.840367 | orchestrator | Tuesday 10 March 2026 00:52:00 +0000 (0:00:01.325) 0:03:05.704 ********* 2026-03-10 01:01:29.840373 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.840377 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.840386 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.840391 | orchestrator | skipping: [testbed-node-0] 2026-03-10 
01:01:29.840397 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.840402 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.840407 | orchestrator | 2026-03-10 01:01:29.840413 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2026-03-10 01:01:29.840416 | orchestrator | Tuesday 10 March 2026 00:52:01 +0000 (0:00:01.512) 0:03:07.217 ********* 2026-03-10 01:01:29.840419 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.840422 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.840425 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.840428 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.840431 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.840434 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.840437 | orchestrator | 2026-03-10 01:01:29.840441 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2026-03-10 01:01:29.840444 | orchestrator | Tuesday 10 March 2026 00:52:03 +0000 (0:00:01.708) 0:03:08.925 ********* 2026-03-10 01:01:29.840474 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.840482 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.840487 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.840493 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.840498 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.840502 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.840505 | orchestrator | 2026-03-10 01:01:29.840510 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2026-03-10 01:01:29.840513 | orchestrator | Tuesday 10 March 2026 00:52:04 +0000 (0:00:01.343) 0:03:10.269 ********* 2026-03-10 01:01:29.840516 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.840519 | orchestrator | skipping: [testbed-node-4] 2026-03-10 
01:01:29.840523 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.840526 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.840531 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.840537 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.840544 | orchestrator | 2026-03-10 01:01:29.840551 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2026-03-10 01:01:29.840558 | orchestrator | Tuesday 10 March 2026 00:52:05 +0000 (0:00:01.191) 0:03:11.460 ********* 2026-03-10 01:01:29.840564 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.840569 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.840574 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.840583 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.840588 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.840593 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.840599 | orchestrator | 2026-03-10 01:01:29.840604 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2026-03-10 01:01:29.840609 | orchestrator | Tuesday 10 March 2026 00:52:07 +0000 (0:00:01.981) 0:03:13.442 ********* 2026-03-10 01:01:29.840615 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.840620 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.840625 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.840631 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.840636 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.840641 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.840647 | orchestrator | 2026-03-10 01:01:29.840652 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2026-03-10 01:01:29.840658 | orchestrator | Tuesday 10 March 2026 00:52:09 +0000 (0:00:01.227) 0:03:14.670 ********* 2026-03-10 
01:01:29.840663 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.840669 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.840675 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.840680 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.840686 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.840691 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.840697 | orchestrator | 2026-03-10 01:01:29.840703 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2026-03-10 01:01:29.840708 | orchestrator | Tuesday 10 March 2026 00:52:12 +0000 (0:00:02.879) 0:03:17.550 ********* 2026-03-10 01:01:29.840713 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:01:29.840718 | orchestrator | 2026-03-10 01:01:29.840723 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2026-03-10 01:01:29.840728 | orchestrator | Tuesday 10 March 2026 00:52:13 +0000 (0:00:01.582) 0:03:19.132 ********* 2026-03-10 01:01:29.840734 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2026-03-10 01:01:29.840740 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2026-03-10 01:01:29.840745 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2026-03-10 01:01:29.840750 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2026-03-10 01:01:29.840757 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2026-03-10 01:01:29.840762 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2026-03-10 01:01:29.840767 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2026-03-10 01:01:29.840771 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2026-03-10 01:01:29.840776 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 
2026-03-10 01:01:29.840781 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2026-03-10 01:01:29.840785 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2026-03-10 01:01:29.840790 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2026-03-10 01:01:29.840795 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2026-03-10 01:01:29.840800 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2026-03-10 01:01:29.840804 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2026-03-10 01:01:29.840809 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2026-03-10 01:01:29.840814 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2026-03-10 01:01:29.840819 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2026-03-10 01:01:29.840829 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2026-03-10 01:01:29.840834 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2026-03-10 01:01:29.840839 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2026-03-10 01:01:29.840847 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2026-03-10 01:01:29.840850 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2026-03-10 01:01:29.840853 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2026-03-10 01:01:29.840857 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2026-03-10 01:01:29.840860 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2026-03-10 01:01:29.840863 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2026-03-10 01:01:29.840866 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2026-03-10 01:01:29.840869 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2026-03-10 01:01:29.840872 | orchestrator | 
changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2026-03-10 01:01:29.840875 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2026-03-10 01:01:29.840878 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2026-03-10 01:01:29.840881 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2026-03-10 01:01:29.840887 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2026-03-10 01:01:29.840890 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2026-03-10 01:01:29.840893 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2026-03-10 01:01:29.840896 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2026-03-10 01:01:29.840899 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2026-03-10 01:01:29.840903 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2026-03-10 01:01:29.840906 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2026-03-10 01:01:29.840909 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2026-03-10 01:01:29.840912 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2026-03-10 01:01:29.840915 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2026-03-10 01:01:29.840918 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2026-03-10 01:01:29.840921 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2026-03-10 01:01:29.840924 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2026-03-10 01:01:29.840927 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-10 01:01:29.840930 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-10 01:01:29.840933 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2026-03-10 01:01:29.840936 | 
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-10 01:01:29.840939 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-10 01:01:29.840942 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2026-03-10 01:01:29.840946 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-10 01:01:29.840949 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-10 01:01:29.840952 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-10 01:01:29.840955 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-10 01:01:29.840958 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-10 01:01:29.840961 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2026-03-10 01:01:29.840966 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-10 01:01:29.840971 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-10 01:01:29.840977 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-10 01:01:29.840981 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-10 01:01:29.840986 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-10 01:01:29.840996 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2026-03-10 01:01:29.841002 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-10 01:01:29.841007 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-10 01:01:29.841013 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-10 01:01:29.841017 | orchestrator | changed: [testbed-node-5] => 
(item=/var/lib/ceph/bootstrap-osd) 2026-03-10 01:01:29.841020 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-10 01:01:29.841023 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2026-03-10 01:01:29.841026 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-10 01:01:29.841029 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-10 01:01:29.841032 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-10 01:01:29.841036 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-10 01:01:29.841039 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-10 01:01:29.841042 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2026-03-10 01:01:29.841051 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-10 01:01:29.841054 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-10 01:01:29.841057 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-10 01:01:29.841060 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-10 01:01:29.841063 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2026-03-10 01:01:29.841066 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-10 01:01:29.841069 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2026-03-10 01:01:29.841073 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-10 01:01:29.841076 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2026-03-10 01:01:29.841079 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2026-03-10 01:01:29.841082 
| orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2026-03-10 01:01:29.841085 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2026-03-10 01:01:29.841088 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2026-03-10 01:01:29.841091 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2026-03-10 01:01:29.841096 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2026-03-10 01:01:29.841099 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2026-03-10 01:01:29.841102 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2026-03-10 01:01:29.841105 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph) 2026-03-10 01:01:29.841108 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2026-03-10 01:01:29.841111 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2026-03-10 01:01:29.841114 | orchestrator | 2026-03-10 01:01:29.841118 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2026-03-10 01:01:29.841121 | orchestrator | Tuesday 10 March 2026 00:52:20 +0000 (0:00:06.871) 0:03:26.004 ********* 2026-03-10 01:01:29.841124 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.841127 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.841130 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.841136 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:01:29.841141 | orchestrator | 2026-03-10 01:01:29.841147 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-10 01:01:29.841153 | orchestrator | Tuesday 10 March 2026 00:52:22 +0000 (0:00:01.514) 0:03:27.518 ********* 2026-03-10 01:01:29.841156 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 
'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-10 01:01:29.841160 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-10 01:01:29.841163 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-10 01:01:29.841166 | orchestrator |
2026-03-10 01:01:29.841169 | orchestrator | TASK [ceph-config : Generate environment file] *********************************
2026-03-10 01:01:29.841172 | orchestrator | Tuesday 10 March 2026 00:52:23 +0000 (0:00:01.690) 0:03:29.209 *********
2026-03-10 01:01:29.841175 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-10 01:01:29.841178 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-10 01:01:29.841181 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-10 01:01:29.841184 | orchestrator |
2026-03-10 01:01:29.841188 | orchestrator | TASK [ceph-config : Reset num_osds] ********************************************
2026-03-10 01:01:29.841191 | orchestrator | Tuesday 10 March 2026 00:52:25 +0000 (0:00:01.747) 0:03:30.957 *********
2026-03-10 01:01:29.841194 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.841197 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.841200 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.841203 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841206 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841209 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841212 | orchestrator |
2026-03-10 01:01:29.841215 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] *********************
2026-03-10 01:01:29.841218 | orchestrator | Tuesday 10 March 2026 00:52:26 +0000 (0:00:01.184) 0:03:32.142 *********
2026-03-10 01:01:29.841221 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.841225 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.841230 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.841235 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841241 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841246 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841252 | orchestrator |
2026-03-10 01:01:29.841257 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ******************
2026-03-10 01:01:29.841264 | orchestrator | Tuesday 10 March 2026 00:52:28 +0000 (0:00:01.561) 0:03:33.704 *********
2026-03-10 01:01:29.841272 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.841277 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.841282 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.841287 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841292 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841297 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841302 | orchestrator |
2026-03-10 01:01:29.841310 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] *********************************
2026-03-10 01:01:29.841315 | orchestrator | Tuesday 10 March 2026 00:52:28 +0000 (0:00:00.681) 0:03:34.385 *********
2026-03-10 01:01:29.841319 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.841324 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.841330 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.841334 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841340 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841346 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841351 | orchestrator |
2026-03-10 01:01:29.841356 | orchestrator | TASK [ceph-config : Set_fact _devices] *****************************************
2026-03-10 01:01:29.841366 | orchestrator | Tuesday 10 March 2026 00:52:29 +0000 (0:00:01.071) 0:03:35.457 *********
2026-03-10 01:01:29.841369 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.841372 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.841375 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.841378 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841381 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841385 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841388 | orchestrator |
2026-03-10 01:01:29.841391 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2026-03-10 01:01:29.841394 | orchestrator | Tuesday 10 March 2026 00:52:31 +0000 (0:00:01.070) 0:03:36.527 *********
2026-03-10 01:01:29.841397 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.841403 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.841406 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.841409 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841412 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841415 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841419 | orchestrator |
2026-03-10 01:01:29.841424 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2026-03-10 01:01:29.841429 | orchestrator | Tuesday 10 March 2026 00:52:32 +0000 (0:00:01.348) 0:03:37.876 *********
2026-03-10 01:01:29.841435 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.841440 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.841445 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.841462 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841465 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841468 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841471 | orchestrator |
2026-03-10 01:01:29.841475 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2026-03-10 01:01:29.841478 | orchestrator | Tuesday 10 March 2026 00:52:33 +0000 (0:00:00.931) 0:03:38.807 *********
2026-03-10 01:01:29.841481 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.841484 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.841487 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.841490 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841493 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841529 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841535 | orchestrator |
2026-03-10 01:01:29.841540 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] ***
2026-03-10 01:01:29.841545 | orchestrator | Tuesday 10 March 2026 00:52:34 +0000 (0:00:01.063) 0:03:39.872 *********
2026-03-10 01:01:29.841551 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841556 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841562 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841568 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.841574 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.841579 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.841584 | orchestrator |
2026-03-10 01:01:29.841590 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] *********************
2026-03-10 01:01:29.841595 | orchestrator | Tuesday 10 March 2026 00:52:37 +0000 (0:00:02.956) 0:03:42.829 *********
2026-03-10 01:01:29.841602 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.841608 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.841614 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.841619 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841625 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841631 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841634 | orchestrator |
2026-03-10 01:01:29.841638 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] *******************************
2026-03-10 01:01:29.841641 | orchestrator | Tuesday 10 March 2026 00:52:38 +0000 (0:00:01.360) 0:03:44.189 *********
2026-03-10 01:01:29.841647 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.841654 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.841657 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.841660 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841663 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841666 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841669 | orchestrator |
2026-03-10 01:01:29.841672 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] **************
2026-03-10 01:01:29.841675 | orchestrator | Tuesday 10 March 2026 00:52:39 +0000 (0:00:01.230) 0:03:45.419 *********
2026-03-10 01:01:29.841678 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.841681 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.841684 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.841687 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841690 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841693 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841696 | orchestrator |
2026-03-10 01:01:29.841699 | orchestrator | TASK [ceph-config : Render rgw configs] ****************************************
2026-03-10 01:01:29.841702 | orchestrator | Tuesday 10 March 2026 00:52:41 +0000 (0:00:01.489) 0:03:46.909 *********
2026-03-10 01:01:29.841706 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-10 01:01:29.841709 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-10 01:01:29.841712 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-10 01:01:29.841715 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841722 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841725 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841728 | orchestrator |
2026-03-10 01:01:29.841731 | orchestrator | TASK [ceph-config : Set config to cluster] *************************************
2026-03-10 01:01:29.841734 | orchestrator | Tuesday 10 March 2026 00:52:42 +0000 (0:00:01.000) 0:03:47.909 *********
2026-03-10 01:01:29.841739 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
2026-03-10 01:01:29.841743 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
2026-03-10 01:01:29.841747 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.841753 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
2026-03-10 01:01:29.841756 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
2026-03-10 01:01:29.841759 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.841763 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
2026-03-10 01:01:29.841766 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
2026-03-10 01:01:29.841774 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.841780 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841784 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841789 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841792 | orchestrator |
2026-03-10 01:01:29.841795 | orchestrator | TASK [ceph-config : Set rgw configs to file] ***********************************
2026-03-10 01:01:29.841798 | orchestrator | Tuesday 10 March 2026 00:52:43 +0000 (0:00:01.265) 0:03:49.175 *********
2026-03-10 01:01:29.841801 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.841804 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.841807 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.841810 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841813 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841816 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841819 | orchestrator |
2026-03-10 01:01:29.841822 | orchestrator | TASK [ceph-config : Create ceph conf directory] ********************************
2026-03-10 01:01:29.841825 | orchestrator | Tuesday 10 March 2026 00:52:44 +0000 (0:00:00.678) 0:03:49.854 *********
2026-03-10 01:01:29.841828 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.841831 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.841834 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.841838 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841843 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841851 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841856 | orchestrator |
2026-03-10 01:01:29.841861 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-10 01:01:29.841866 | orchestrator | Tuesday 10 March 2026 00:52:45 +0000 (0:00:01.085) 0:03:50.939 *********
2026-03-10 01:01:29.841871 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.841876 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.841880 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.841885 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841891 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841897 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841902 | orchestrator |
2026-03-10 01:01:29.841908 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-10 01:01:29.841913 | orchestrator | Tuesday 10 March 2026 00:52:46 +0000 (0:00:00.762) 0:03:51.701 *********
2026-03-10 01:01:29.841918 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.841924 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.841929 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.841933 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841937 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841942 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841947 | orchestrator |
2026-03-10 01:01:29.841952 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-10 01:01:29.841959 | orchestrator | Tuesday 10 March 2026 00:52:47 +0000 (0:00:01.060) 0:03:52.762 *********
2026-03-10 01:01:29.841964 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.841969 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.841973 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.841978 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.841983 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.841987 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.841992 | orchestrator |
2026-03-10 01:01:29.841997 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-10 01:01:29.842001 | orchestrator | Tuesday 10 March 2026 00:52:48 +0000 (0:00:01.305) 0:03:54.067 *********
2026-03-10 01:01:29.842010 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.842042 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.842047 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.842053 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.842058 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.842063 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.842068 | orchestrator |
2026-03-10 01:01:29.842074 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-10 01:01:29.842078 | orchestrator | Tuesday 10 March 2026 00:52:49 +0000 (0:00:01.194) 0:03:55.261 *********
2026-03-10 01:01:29.842083 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 01:01:29.842089 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 01:01:29.842096 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 01:01:29.842101 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.842106 | orchestrator |
2026-03-10 01:01:29.842110 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-10 01:01:29.842115 | orchestrator | Tuesday 10 March 2026 00:52:50 +0000 (0:00:00.434) 0:03:55.696 *********
2026-03-10 01:01:29.842119 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 01:01:29.842124 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 01:01:29.842128 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 01:01:29.842133 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.842137 | orchestrator |
2026-03-10 01:01:29.842142 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-10 01:01:29.842146 | orchestrator | Tuesday 10 March 2026 00:52:50 +0000 (0:00:00.496) 0:03:56.193 *********
2026-03-10 01:01:29.842151 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 01:01:29.842155 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 01:01:29.842160 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 01:01:29.842165 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.842171 | orchestrator |
2026-03-10 01:01:29.842176 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-10 01:01:29.842182 | orchestrator | Tuesday 10 March 2026 00:52:51 +0000 (0:00:00.468) 0:03:56.661 *********
2026-03-10 01:01:29.842186 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.842191 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.842195 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.842200 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.842204 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.842209 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.842214 | orchestrator |
2026-03-10 01:01:29.842219 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-10 01:01:29.842224 | orchestrator | Tuesday 10 March 2026 00:52:51 +0000 (0:00:00.729) 0:03:57.391 *********
2026-03-10 01:01:29.842229 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-10 01:01:29.842235 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-10 01:01:29.842239 | orchestrator | skipping: [testbed-node-0] => (item=0)
2026-03-10 01:01:29.842244 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.842248 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-10 01:01:29.842252 | orchestrator | skipping: [testbed-node-1] => (item=0)
2026-03-10 01:01:29.842257 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.842261 | orchestrator | skipping: [testbed-node-2] => (item=0)
2026-03-10 01:01:29.842266 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.842270 | orchestrator |
2026-03-10 01:01:29.842274 | orchestrator | TASK [ceph-config : Generate Ceph file] ****************************************
2026-03-10 01:01:29.842278 | orchestrator | Tuesday 10 March 2026 00:52:54 +0000 (0:00:02.625) 0:04:00.016 *********
2026-03-10 01:01:29.842283 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:01:29.842292 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:01:29.842297 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:01:29.842302 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:01:29.842308 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:01:29.842314 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:01:29.842319 | orchestrator |
2026-03-10 01:01:29.842324 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-10 01:01:29.842329 | orchestrator | Tuesday 10 March 2026 00:52:57 +0000 (0:00:02.994) 0:04:03.011 *********
2026-03-10 01:01:29.842335 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:01:29.842340 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:01:29.842346 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:01:29.842351 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:01:29.842355 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:01:29.842360 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:01:29.842364 | orchestrator |
2026-03-10 01:01:29.842369 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] **********************************
2026-03-10 01:01:29.842374 | orchestrator | Tuesday 10 March 2026 00:52:58 +0000 (0:00:01.142) 0:04:04.153 *********
2026-03-10 01:01:29.842379 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.842384 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.842389 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.842395 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:01:29.842400 | orchestrator |
2026-03-10 01:01:29.842405 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
2026-03-10 01:01:29.842420 | orchestrator | Tuesday 10 March 2026 00:52:59 +0000 (0:00:00.913) 0:04:05.066 *********
2026-03-10 01:01:29.842425 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.842430 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.842436 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.842441 | orchestrator |
2026-03-10 01:01:29.842446 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
2026-03-10 01:01:29.842478 | orchestrator | Tuesday 10 March 2026 00:52:59 +0000 (0:00:00.355) 0:04:05.421 *********
2026-03-10 01:01:29.842484 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:01:29.842489 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:01:29.842494 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:01:29.842499 | orchestrator |
2026-03-10 01:01:29.842505 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
2026-03-10 01:01:29.842510 | orchestrator | Tuesday 10 March 2026 00:53:01 +0000 (0:00:01.714) 0:04:07.136 *********
2026-03-10 01:01:29.842515 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-10 01:01:29.842520 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-10 01:01:29.842525 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-10 01:01:29.842530 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.842535 | orchestrator |
2026-03-10 01:01:29.842540 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
2026-03-10 01:01:29.842546 | orchestrator | Tuesday 10 March 2026 00:53:02 +0000 (0:00:00.685) 0:04:07.821 *********
2026-03-10 01:01:29.842555 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.842561 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.842566 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.842579 | orchestrator |
2026-03-10 01:01:29.842584 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-10 01:01:29.842594 | orchestrator | Tuesday 10 March 2026 00:53:02 +0000 (0:00:00.363) 0:04:08.185 *********
2026-03-10 01:01:29.842598 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.842603 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.842608 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.842613 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 01:01:29.842618 | orchestrator |
2026-03-10 01:01:29.842628 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-10 01:01:29.842633 | orchestrator | Tuesday 10 March 2026 00:53:03 +0000 (0:00:01.293) 0:04:09.478 *********
2026-03-10 01:01:29.842637 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 01:01:29.842642 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 01:01:29.842648 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 01:01:29.842652 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.842657 | orchestrator |
2026-03-10 01:01:29.842662 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-10 01:01:29.842667 | orchestrator | Tuesday 10 March 2026 00:53:04 +0000 (0:00:00.451) 0:04:09.929 *********
2026-03-10 01:01:29.842671 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.842677 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.842682 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.842687 | orchestrator |
2026-03-10 01:01:29.842691 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-10 01:01:29.842696 | orchestrator | Tuesday 10 March 2026 00:53:04 +0000 (0:00:00.236) 0:04:10.361 *********
2026-03-10 01:01:29.842701 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.842706 | orchestrator |
2026-03-10 01:01:29.842711 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-10 01:01:29.842716 | orchestrator | Tuesday 10 March 2026 00:53:05 +0000 (0:00:00.236) 0:04:10.597 *********
2026-03-10 01:01:29.842720 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.842725 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.842730 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.842736 | orchestrator |
2026-03-10 01:01:29.842742 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-10 01:01:29.842747 | orchestrator | Tuesday 10 March 2026 00:53:05 +0000 (0:00:00.457) 0:04:11.054 *********
2026-03-10 01:01:29.842753 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.842758 | orchestrator |
2026-03-10 01:01:29.842762 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-10 01:01:29.842767 | orchestrator | Tuesday 10 March 2026 00:53:05 +0000 (0:00:00.243) 0:04:11.298 *********
2026-03-10 01:01:29.842773 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.842778 | orchestrator |
2026-03-10 01:01:29.842783 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-10 01:01:29.842788 | orchestrator | Tuesday 10 March 2026 00:53:06 +0000 (0:00:00.318) 0:04:11.616 *********
2026-03-10 01:01:29.842792 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.842797 | orchestrator |
2026-03-10 01:01:29.842802 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-10 01:01:29.842808 | orchestrator | Tuesday 10 March 2026 00:53:06 +0000 (0:00:00.130) 0:04:11.746 *********
2026-03-10 01:01:29.842813 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.842819 | orchestrator |
2026-03-10 01:01:29.842824 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-10 01:01:29.842829 | orchestrator | Tuesday 10 March 2026 00:53:07 +0000 (0:00:00.910) 0:04:12.657 *********
2026-03-10 01:01:29.842834 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.842840 | orchestrator |
2026-03-10 01:01:29.842845 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-10 01:01:29.842850 | orchestrator | Tuesday 10 March 2026 00:53:07 +0000 (0:00:00.247) 0:04:12.904 *********
2026-03-10 01:01:29.842856 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 01:01:29.842861 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 01:01:29.842866 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 01:01:29.842872 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.842877 | orchestrator |
2026-03-10 01:01:29.842883 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-10 01:01:29.842900 | orchestrator | Tuesday 10 March 2026 00:53:07 +0000 (0:00:00.407) 0:04:13.312 *********
2026-03-10 01:01:29.842906 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.842911 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.842916 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.842922 | orchestrator |
2026-03-10 01:01:29.842927 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-10 01:01:29.842933 | orchestrator | Tuesday 10 March 2026 00:53:08 +0000 (0:00:00.365) 0:04:13.678 *********
2026-03-10 01:01:29.842938 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.842944 | orchestrator |
2026-03-10 01:01:29.842949 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-10 01:01:29.842955 | orchestrator | Tuesday 10 March 2026 00:53:08 +0000 (0:00:00.245) 0:04:13.923 *********
2026-03-10 01:01:29.842961 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.842966 | orchestrator |
2026-03-10 01:01:29.842972 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-10 01:01:29.842978 | orchestrator | Tuesday 10 March 2026 00:53:08 +0000 (0:00:00.253) 0:04:14.176 *********
2026-03-10 01:01:29.842983 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.842988 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.842994 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.842999 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 01:01:29.843005 | orchestrator |
2026-03-10 01:01:29.843014 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-10 01:01:29.843019 | orchestrator | Tuesday 10 March 2026 00:53:10 +0000 (0:00:01.414) 0:04:15.591 *********
2026-03-10 01:01:29.843025 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.843032 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.843039 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.843045 | orchestrator |
2026-03-10 01:01:29.843050 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-10 01:01:29.843055 | orchestrator | Tuesday 10 March 2026 00:53:10 +0000 (0:00:00.515) 0:04:16.107 *********
2026-03-10 01:01:29.843060 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:01:29.843066 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:01:29.843071 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:01:29.843077 | orchestrator |
2026-03-10 01:01:29.843082 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-10 01:01:29.843087 | orchestrator | Tuesday 10 March 2026 00:53:12 +0000 (0:00:01.399) 0:04:17.507 *********
2026-03-10 01:01:29.843092 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 01:01:29.843098 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 01:01:29.843103 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 01:01:29.843108 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.843114 | orchestrator |
2026-03-10 01:01:29.843119 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-10 01:01:29.843124 | orchestrator | Tuesday 10 March 2026 00:53:12 +0000 (0:00:00.944) 0:04:18.452 *********
2026-03-10 01:01:29.843129 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.843135 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.843140 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.843146 | orchestrator |
2026-03-10 01:01:29.843151 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-10 01:01:29.843157 | orchestrator | Tuesday 10 March 2026 00:53:13 +0000 (0:00:00.749) 0:04:19.201 *********
2026-03-10 01:01:29.843162 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.843169 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.843174 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.843179 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 01:01:29.843189 | orchestrator |
2026-03-10 01:01:29.843195 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-10 01:01:29.843200 | orchestrator | Tuesday 10 March 2026 00:53:14 +0000 (0:00:01.087) 0:04:20.289 *********
2026-03-10 01:01:29.843205 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.843210 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.843216 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.843221 | orchestrator |
2026-03-10 01:01:29.843226 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-10 01:01:29.843231 | orchestrator | Tuesday 10 March 2026 00:53:15 +0000 (0:00:00.612) 0:04:20.902 *********
2026-03-10 01:01:29.843237 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:01:29.843242 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:01:29.843248 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:01:29.843253 | orchestrator |
2026-03-10 01:01:29.843283 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-10 01:01:29.843294 | orchestrator | Tuesday 10 March 2026 00:53:16 +0000 (0:00:01.244) 0:04:22.146 *********
2026-03-10 01:01:29.843300 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 01:01:29.843306 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 01:01:29.843311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 01:01:29.843316 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.843321 | orchestrator |
2026-03-10 01:01:29.843327 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-10 01:01:29.843332 | orchestrator | Tuesday 10 March 2026 00:53:17 +0000 (0:00:00.620) 0:04:22.767 *********
2026-03-10 01:01:29.843338 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.843343 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.843349 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.843354 | orchestrator |
2026-03-10 01:01:29.843360 | orchestrator | RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
2026-03-10 01:01:29.843366 | orchestrator | Tuesday 10 March 2026 00:53:17 +0000 (0:00:00.323) 0:04:23.090 *********
2026-03-10 01:01:29.843372 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.843377 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.843383 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.843388 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.843393 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.843404 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.843410 | orchestrator |
2026-03-10 01:01:29.843416 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-10 01:01:29.843421 | orchestrator | Tuesday 10 March 2026 00:53:19 +0000 (0:00:01.501) 0:04:24.592 *********
2026-03-10 01:01:29.843427 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.843433 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.843438 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.843444 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:01:29.843459 | orchestrator |
2026-03-10 01:01:29.843464 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-10 01:01:29.843469 | orchestrator | Tuesday 10 March 2026 00:53:20 +0000 (0:00:01.031) 0:04:25.624 *********
2026-03-10 01:01:29.843475 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.843481 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.843486 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.843491 | orchestrator |
2026-03-10 01:01:29.843497 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-10 01:01:29.843503 | orchestrator | Tuesday 10 March 2026 00:53:20 +0000 (0:00:00.679) 0:04:26.303 *********
2026-03-10 01:01:29.843508 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:01:29.843514 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:01:29.843538 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:01:29.843544 | orchestrator |
2026-03-10 01:01:29.843553 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-10 01:01:29.843563 | orchestrator | Tuesday 10 March 2026 00:53:22 +0000 (0:00:01.544) 0:04:27.848 *********
2026-03-10 01:01:29.843569 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-10 01:01:29.843575 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-10 01:01:29.843580 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-10 01:01:29.843586 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.843592 | orchestrator |
2026-03-10 01:01:29.843598 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-10 01:01:29.843604 | orchestrator | Tuesday 10 March 2026 00:53:23 +0000 (0:00:00.657) 0:04:28.506 *********
2026-03-10 01:01:29.843609 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.843615 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.843620 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.843625 | orchestrator |
2026-03-10 01:01:29.843631 | orchestrator | PLAY [Apply role ceph-mon] *****************************************************
2026-03-10 01:01:29.843637 | orchestrator |
2026-03-10 01:01:29.843643 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-10 01:01:29.843648 | orchestrator | Tuesday 10 March 2026 00:53:24 +0000 (0:00:01.020) 0:04:29.526 *********
2026-03-10 01:01:29.843654 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:01:29.843660 | orchestrator |
2026-03-10 01:01:29.843666 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-10 01:01:29.843671 | orchestrator | Tuesday 10 March 2026 00:53:24 +0000 (0:00:00.556) 0:04:30.083 *********
2026-03-10
01:01:29.843677 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:01:29.843683 | orchestrator | 2026-03-10 01:01:29.843688 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-10 01:01:29.843693 | orchestrator | Tuesday 10 March 2026 00:53:25 +0000 (0:00:00.540) 0:04:30.624 ********* 2026-03-10 01:01:29.843700 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.843705 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.843711 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.843716 | orchestrator | 2026-03-10 01:01:29.843722 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-10 01:01:29.843728 | orchestrator | Tuesday 10 March 2026 00:53:26 +0000 (0:00:01.091) 0:04:31.715 ********* 2026-03-10 01:01:29.843733 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.843739 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.843745 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.843750 | orchestrator | 2026-03-10 01:01:29.843756 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-10 01:01:29.843761 | orchestrator | Tuesday 10 March 2026 00:53:26 +0000 (0:00:00.334) 0:04:32.050 ********* 2026-03-10 01:01:29.843767 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.843772 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.843778 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.843784 | orchestrator | 2026-03-10 01:01:29.843789 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-10 01:01:29.843795 | orchestrator | Tuesday 10 March 2026 00:53:26 +0000 (0:00:00.395) 0:04:32.445 ********* 2026-03-10 01:01:29.843801 | orchestrator | skipping: [testbed-node-0] 
2026-03-10 01:01:29.843806 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.843812 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.843817 | orchestrator | 2026-03-10 01:01:29.843823 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-10 01:01:29.843828 | orchestrator | Tuesday 10 March 2026 00:53:27 +0000 (0:00:00.412) 0:04:32.858 ********* 2026-03-10 01:01:29.843834 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.843839 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.843849 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.843855 | orchestrator | 2026-03-10 01:01:29.843860 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-10 01:01:29.843865 | orchestrator | Tuesday 10 March 2026 00:53:28 +0000 (0:00:01.116) 0:04:33.974 ********* 2026-03-10 01:01:29.843870 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.843876 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.843881 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.843887 | orchestrator | 2026-03-10 01:01:29.843892 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-10 01:01:29.843898 | orchestrator | Tuesday 10 March 2026 00:53:28 +0000 (0:00:00.364) 0:04:34.338 ********* 2026-03-10 01:01:29.843909 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.843915 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.843920 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.843933 | orchestrator | 2026-03-10 01:01:29.843939 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-10 01:01:29.843945 | orchestrator | Tuesday 10 March 2026 00:53:29 +0000 (0:00:00.349) 0:04:34.688 ********* 2026-03-10 01:01:29.843951 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.843956 
| orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.843962 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.843968 | orchestrator | 2026-03-10 01:01:29.843974 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-10 01:01:29.843979 | orchestrator | Tuesday 10 March 2026 00:53:30 +0000 (0:00:00.825) 0:04:35.514 ********* 2026-03-10 01:01:29.843985 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.843990 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.843995 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.844001 | orchestrator | 2026-03-10 01:01:29.844007 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-10 01:01:29.844012 | orchestrator | Tuesday 10 March 2026 00:53:31 +0000 (0:00:01.583) 0:04:37.097 ********* 2026-03-10 01:01:29.844018 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.844023 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.844029 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.844034 | orchestrator | 2026-03-10 01:01:29.844043 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-10 01:01:29.844049 | orchestrator | Tuesday 10 March 2026 00:53:31 +0000 (0:00:00.360) 0:04:37.458 ********* 2026-03-10 01:01:29.844054 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.844060 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.844066 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.844071 | orchestrator | 2026-03-10 01:01:29.844076 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-10 01:01:29.844082 | orchestrator | Tuesday 10 March 2026 00:53:32 +0000 (0:00:00.509) 0:04:37.967 ********* 2026-03-10 01:01:29.844088 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.844093 | orchestrator | skipping: [testbed-node-1] 
2026-03-10 01:01:29.844099 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.844105 | orchestrator | 2026-03-10 01:01:29.844111 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-10 01:01:29.844117 | orchestrator | Tuesday 10 March 2026 00:53:33 +0000 (0:00:00.721) 0:04:38.689 ********* 2026-03-10 01:01:29.844122 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.844128 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.844133 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.844139 | orchestrator | 2026-03-10 01:01:29.844144 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-10 01:01:29.844149 | orchestrator | Tuesday 10 March 2026 00:53:34 +0000 (0:00:00.916) 0:04:39.605 ********* 2026-03-10 01:01:29.844154 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.844160 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.844165 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.844171 | orchestrator | 2026-03-10 01:01:29.844180 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-10 01:01:29.844186 | orchestrator | Tuesday 10 March 2026 00:53:34 +0000 (0:00:00.469) 0:04:40.075 ********* 2026-03-10 01:01:29.844192 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.844197 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.844202 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.844208 | orchestrator | 2026-03-10 01:01:29.844213 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-10 01:01:29.844218 | orchestrator | Tuesday 10 March 2026 00:53:34 +0000 (0:00:00.375) 0:04:40.451 ********* 2026-03-10 01:01:29.844224 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.844229 | orchestrator | skipping: [testbed-node-1] 
2026-03-10 01:01:29.844234 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.844240 | orchestrator | 2026-03-10 01:01:29.844245 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-10 01:01:29.844251 | orchestrator | Tuesday 10 March 2026 00:53:35 +0000 (0:00:00.413) 0:04:40.865 ********* 2026-03-10 01:01:29.844256 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.844261 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.844266 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.844271 | orchestrator | 2026-03-10 01:01:29.844277 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-10 01:01:29.844283 | orchestrator | Tuesday 10 March 2026 00:53:35 +0000 (0:00:00.480) 0:04:41.346 ********* 2026-03-10 01:01:29.844288 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.844294 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.844299 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.844306 | orchestrator | 2026-03-10 01:01:29.844313 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-10 01:01:29.844319 | orchestrator | Tuesday 10 March 2026 00:53:36 +0000 (0:00:00.721) 0:04:42.068 ********* 2026-03-10 01:01:29.844325 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.844330 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.844334 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.844342 | orchestrator | 2026-03-10 01:01:29.844347 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2026-03-10 01:01:29.844353 | orchestrator | Tuesday 10 March 2026 00:53:37 +0000 (0:00:00.743) 0:04:42.811 ********* 2026-03-10 01:01:29.844358 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.844363 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.844368 | orchestrator | ok: [testbed-node-2] 
2026-03-10 01:01:29.844374 | orchestrator | 2026-03-10 01:01:29.844379 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2026-03-10 01:01:29.844384 | orchestrator | Tuesday 10 March 2026 00:53:37 +0000 (0:00:00.413) 0:04:43.225 ********* 2026-03-10 01:01:29.844390 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:01:29.844395 | orchestrator | 2026-03-10 01:01:29.844401 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2026-03-10 01:01:29.844406 | orchestrator | Tuesday 10 March 2026 00:53:38 +0000 (0:00:01.072) 0:04:44.298 ********* 2026-03-10 01:01:29.844411 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.844417 | orchestrator | 2026-03-10 01:01:29.844427 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2026-03-10 01:01:29.844433 | orchestrator | Tuesday 10 March 2026 00:53:38 +0000 (0:00:00.184) 0:04:44.482 ********* 2026-03-10 01:01:29.844438 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-10 01:01:29.844444 | orchestrator | 2026-03-10 01:01:29.844479 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] **************************** 2026-03-10 01:01:29.844486 | orchestrator | Tuesday 10 March 2026 00:53:40 +0000 (0:00:01.537) 0:04:46.020 ********* 2026-03-10 01:01:29.844492 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.844498 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.844504 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.844511 | orchestrator | 2026-03-10 01:01:29.844523 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2026-03-10 01:01:29.844529 | orchestrator | Tuesday 10 March 2026 00:53:41 +0000 (0:00:00.651) 0:04:46.671 ********* 2026-03-10 01:01:29.844535 | orchestrator | ok: [testbed-node-0] 
2026-03-10 01:01:29.844541 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.844547 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.844552 | orchestrator | 2026-03-10 01:01:29.844558 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2026-03-10 01:01:29.844564 | orchestrator | Tuesday 10 March 2026 00:53:42 +0000 (0:00:00.987) 0:04:47.658 ********* 2026-03-10 01:01:29.844571 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.844576 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.844583 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.844595 | orchestrator | 2026-03-10 01:01:29.844602 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2026-03-10 01:01:29.844608 | orchestrator | Tuesday 10 March 2026 00:53:43 +0000 (0:00:01.429) 0:04:49.088 ********* 2026-03-10 01:01:29.844614 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.844620 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.844626 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.844631 | orchestrator | 2026-03-10 01:01:29.844637 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2026-03-10 01:01:29.844643 | orchestrator | Tuesday 10 March 2026 00:53:44 +0000 (0:00:00.868) 0:04:49.957 ********* 2026-03-10 01:01:29.844649 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.844655 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.844661 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.844667 | orchestrator | 2026-03-10 01:01:29.844673 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2026-03-10 01:01:29.844680 | orchestrator | Tuesday 10 March 2026 00:53:45 +0000 (0:00:00.934) 0:04:50.891 ********* 2026-03-10 01:01:29.844686 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.844692 | 
orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.844698 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.844704 | orchestrator | 2026-03-10 01:01:29.844710 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2026-03-10 01:01:29.844716 | orchestrator | Tuesday 10 March 2026 00:53:46 +0000 (0:00:00.929) 0:04:51.821 ********* 2026-03-10 01:01:29.844722 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.844728 | orchestrator | 2026-03-10 01:01:29.844734 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2026-03-10 01:01:29.844740 | orchestrator | Tuesday 10 March 2026 00:53:48 +0000 (0:00:02.570) 0:04:54.392 ********* 2026-03-10 01:01:29.844746 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.844752 | orchestrator | 2026-03-10 01:01:29.844758 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2026-03-10 01:01:29.844765 | orchestrator | Tuesday 10 March 2026 00:53:50 +0000 (0:00:01.421) 0:04:55.814 ********* 2026-03-10 01:01:29.844770 | orchestrator | changed: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:01:29.844776 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-10 01:01:29.844781 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:01:29.844787 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-10 01:01:29.844793 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-10 01:01:29.844799 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-10 01:01:29.844805 | orchestrator | changed: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-10 01:01:29.844811 | orchestrator | changed: [testbed-node-1 -> {{ item }}] 2026-03-10 01:01:29.844817 | orchestrator | ok: [testbed-node-2] => (item=None) 
2026-03-10 01:01:29.844823 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-10 01:01:29.844829 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2026-03-10 01:01:29.844839 | orchestrator | ok: [testbed-node-0 -> {{ item }}] 2026-03-10 01:01:29.844846 | orchestrator | 2026-03-10 01:01:29.844852 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2026-03-10 01:01:29.844858 | orchestrator | Tuesday 10 March 2026 00:53:55 +0000 (0:00:04.753) 0:05:00.568 ********* 2026-03-10 01:01:29.844863 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.844869 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.844875 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.844881 | orchestrator | 2026-03-10 01:01:29.844887 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2026-03-10 01:01:29.844893 | orchestrator | Tuesday 10 March 2026 00:53:56 +0000 (0:00:01.884) 0:05:02.452 ********* 2026-03-10 01:01:29.844899 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.844906 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.844912 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.844918 | orchestrator | 2026-03-10 01:01:29.844924 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2026-03-10 01:01:29.844930 | orchestrator | Tuesday 10 March 2026 00:53:57 +0000 (0:00:00.517) 0:05:02.970 ********* 2026-03-10 01:01:29.844936 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.844942 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.844948 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.844954 | orchestrator | 2026-03-10 01:01:29.844960 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2026-03-10 01:01:29.844966 | orchestrator | Tuesday 10 March 2026 00:53:58 +0000 (0:00:00.658) 
0:05:03.628 ********* 2026-03-10 01:01:29.844972 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.844982 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.844988 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.844994 | orchestrator | 2026-03-10 01:01:29.845000 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2026-03-10 01:01:29.845007 | orchestrator | Tuesday 10 March 2026 00:54:00 +0000 (0:00:01.948) 0:05:05.576 ********* 2026-03-10 01:01:29.845013 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.845019 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.845025 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.845031 | orchestrator | 2026-03-10 01:01:29.845037 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2026-03-10 01:01:29.845043 | orchestrator | Tuesday 10 March 2026 00:54:01 +0000 (0:00:01.498) 0:05:07.074 ********* 2026-03-10 01:01:29.845049 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.845055 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.845062 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.845067 | orchestrator | 2026-03-10 01:01:29.845073 | orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2026-03-10 01:01:29.845080 | orchestrator | Tuesday 10 March 2026 00:54:02 +0000 (0:00:00.495) 0:05:07.570 ********* 2026-03-10 01:01:29.845085 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:01:29.845090 | orchestrator | 2026-03-10 01:01:29.845099 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2026-03-10 01:01:29.845105 | orchestrator | Tuesday 10 March 2026 00:54:02 +0000 (0:00:00.802) 0:05:08.372 ********* 2026-03-10 01:01:29.845110 | orchestrator | 
skipping: [testbed-node-0] 2026-03-10 01:01:29.845116 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.845122 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.845128 | orchestrator | 2026-03-10 01:01:29.845134 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2026-03-10 01:01:29.845140 | orchestrator | Tuesday 10 March 2026 00:54:03 +0000 (0:00:00.687) 0:05:09.060 ********* 2026-03-10 01:01:29.845146 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.845151 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.845157 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.845163 | orchestrator | 2026-03-10 01:01:29.845173 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2026-03-10 01:01:29.845179 | orchestrator | Tuesday 10 March 2026 00:54:04 +0000 (0:00:00.522) 0:05:09.582 ********* 2026-03-10 01:01:29.845186 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:01:29.845191 | orchestrator | 2026-03-10 01:01:29.845196 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2026-03-10 01:01:29.845202 | orchestrator | Tuesday 10 March 2026 00:54:04 +0000 (0:00:00.863) 0:05:10.446 ********* 2026-03-10 01:01:29.845207 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.845213 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.845219 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.845224 | orchestrator | 2026-03-10 01:01:29.845230 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2026-03-10 01:01:29.845235 | orchestrator | Tuesday 10 March 2026 00:54:07 +0000 (0:00:02.493) 0:05:12.940 ********* 2026-03-10 01:01:29.845242 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.845248 | orchestrator | 
changed: [testbed-node-1] 2026-03-10 01:01:29.845253 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.845259 | orchestrator | 2026-03-10 01:01:29.845265 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2026-03-10 01:01:29.845271 | orchestrator | Tuesday 10 March 2026 00:54:09 +0000 (0:00:01.647) 0:05:14.587 ********* 2026-03-10 01:01:29.845277 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.845282 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.845287 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.845292 | orchestrator | 2026-03-10 01:01:29.845298 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2026-03-10 01:01:29.845304 | orchestrator | Tuesday 10 March 2026 00:54:11 +0000 (0:00:02.263) 0:05:16.851 ********* 2026-03-10 01:01:29.845310 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.845315 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.845321 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.845327 | orchestrator | 2026-03-10 01:01:29.845333 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2026-03-10 01:01:29.845338 | orchestrator | Tuesday 10 March 2026 00:54:14 +0000 (0:00:02.770) 0:05:19.622 ********* 2026-03-10 01:01:29.845344 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:01:29.845349 | orchestrator | 2026-03-10 01:01:29.845355 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2026-03-10 01:01:29.845361 | orchestrator | Tuesday 10 March 2026 00:54:15 +0000 (0:00:00.868) 0:05:20.490 ********* 2026-03-10 01:01:29.845366 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 
2026-03-10 01:01:29.845372 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.845378 | orchestrator | 2026-03-10 01:01:29.845384 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2026-03-10 01:01:29.845389 | orchestrator | Tuesday 10 March 2026 00:54:36 +0000 (0:00:21.923) 0:05:42.413 ********* 2026-03-10 01:01:29.845395 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.845400 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.845406 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.845411 | orchestrator | 2026-03-10 01:01:29.845416 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2026-03-10 01:01:29.845421 | orchestrator | Tuesday 10 March 2026 00:54:46 +0000 (0:00:09.908) 0:05:52.322 ********* 2026-03-10 01:01:29.845426 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.845432 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.845438 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.845443 | orchestrator | 2026-03-10 01:01:29.845458 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2026-03-10 01:01:29.845470 | orchestrator | Tuesday 10 March 2026 00:54:47 +0000 (0:00:00.589) 0:05:52.911 ********* 2026-03-10 01:01:29.845483 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__8215cd795d865842638b19b216d8d13200e1eccc'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-10 01:01:29.845490 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 
'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__8215cd795d865842638b19b216d8d13200e1eccc'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-10 01:01:29.845500 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__8215cd795d865842638b19b216d8d13200e1eccc'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-10 01:01:29.845507 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__8215cd795d865842638b19b216d8d13200e1eccc'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-10 01:01:29.845513 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__8215cd795d865842638b19b216d8d13200e1eccc'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2026-03-10 01:01:29.845519 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__8215cd795d865842638b19b216d8d13200e1eccc'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__8215cd795d865842638b19b216d8d13200e1eccc'}])  2026-03-10 01:01:29.845526 | orchestrator | 2026-03-10 01:01:29.845532 | orchestrator | RUNNING HANDLER 
[ceph-handler : Make tempdir for scripts] ********************** 2026-03-10 01:01:29.845538 | orchestrator | Tuesday 10 March 2026 00:55:02 +0000 (0:00:15.306) 0:06:08.217 ********* 2026-03-10 01:01:29.845543 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.845549 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.845554 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.845559 | orchestrator | 2026-03-10 01:01:29.845563 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-10 01:01:29.845568 | orchestrator | Tuesday 10 March 2026 00:55:03 +0000 (0:00:00.355) 0:06:08.573 ********* 2026-03-10 01:01:29.845574 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:01:29.845578 | orchestrator | 2026-03-10 01:01:29.845583 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-10 01:01:29.845588 | orchestrator | Tuesday 10 March 2026 00:55:03 +0000 (0:00:00.900) 0:06:09.473 ********* 2026-03-10 01:01:29.845593 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.845598 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.845602 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.845608 | orchestrator | 2026-03-10 01:01:29.845613 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-10 01:01:29.845618 | orchestrator | Tuesday 10 March 2026 00:55:04 +0000 (0:00:00.398) 0:06:09.872 ********* 2026-03-10 01:01:29.845626 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.845629 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.845632 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.845636 | orchestrator | 2026-03-10 01:01:29.845639 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-10 
01:01:29.845642 | orchestrator | Tuesday 10 March 2026 00:55:04 +0000 (0:00:00.505) 0:06:10.377 ********* 2026-03-10 01:01:29.845645 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-10 01:01:29.845648 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-10 01:01:29.845651 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-10 01:01:29.845654 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.845657 | orchestrator | 2026-03-10 01:01:29.845660 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-10 01:01:29.845663 | orchestrator | Tuesday 10 March 2026 00:55:06 +0000 (0:00:01.443) 0:06:11.821 ********* 2026-03-10 01:01:29.845666 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.845670 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.845679 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.845682 | orchestrator | 2026-03-10 01:01:29.845685 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-03-10 01:01:29.845689 | orchestrator | 2026-03-10 01:01:29.845692 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-10 01:01:29.845695 | orchestrator | Tuesday 10 March 2026 00:55:07 +0000 (0:00:00.695) 0:06:12.516 ********* 2026-03-10 01:01:29.845698 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:01:29.845701 | orchestrator | 2026-03-10 01:01:29.845705 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-10 01:01:29.845708 | orchestrator | Tuesday 10 March 2026 00:55:07 +0000 (0:00:00.637) 0:06:13.153 ********* 2026-03-10 01:01:29.845711 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, 
testbed-node-1, testbed-node-2 2026-03-10 01:01:29.845714 | orchestrator | 2026-03-10 01:01:29.845717 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-10 01:01:29.845720 | orchestrator | Tuesday 10 March 2026 00:55:08 +0000 (0:00:00.969) 0:06:14.123 ********* 2026-03-10 01:01:29.845723 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.845726 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.845732 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.845737 | orchestrator | 2026-03-10 01:01:29.845742 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-10 01:01:29.845748 | orchestrator | Tuesday 10 March 2026 00:55:09 +0000 (0:00:00.783) 0:06:14.906 ********* 2026-03-10 01:01:29.845753 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.845756 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.845759 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.845762 | orchestrator | 2026-03-10 01:01:29.845765 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-10 01:01:29.845769 | orchestrator | Tuesday 10 March 2026 00:55:09 +0000 (0:00:00.386) 0:06:15.292 ********* 2026-03-10 01:01:29.845772 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.845775 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.845778 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.845781 | orchestrator | 2026-03-10 01:01:29.845784 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-10 01:01:29.845787 | orchestrator | Tuesday 10 March 2026 00:55:10 +0000 (0:00:00.774) 0:06:16.067 ********* 2026-03-10 01:01:29.845790 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.845793 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.845796 | orchestrator | skipping: 
[testbed-node-2] 2026-03-10 01:01:29.845799 | orchestrator | 2026-03-10 01:01:29.845802 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-10 01:01:29.845808 | orchestrator | Tuesday 10 March 2026 00:55:11 +0000 (0:00:00.525) 0:06:16.593 ********* 2026-03-10 01:01:29.845811 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.845814 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.845817 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.845821 | orchestrator | 2026-03-10 01:01:29.845824 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-10 01:01:29.845827 | orchestrator | Tuesday 10 March 2026 00:55:12 +0000 (0:00:01.002) 0:06:17.596 ********* 2026-03-10 01:01:29.845830 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.845833 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.845836 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.845839 | orchestrator | 2026-03-10 01:01:29.845842 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-10 01:01:29.845846 | orchestrator | Tuesday 10 March 2026 00:55:12 +0000 (0:00:00.344) 0:06:17.940 ********* 2026-03-10 01:01:29.845852 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.845858 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.845863 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.845868 | orchestrator | 2026-03-10 01:01:29.845874 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-10 01:01:29.845878 | orchestrator | Tuesday 10 March 2026 00:55:13 +0000 (0:00:00.640) 0:06:18.581 ********* 2026-03-10 01:01:29.845881 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.845884 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.845887 | orchestrator | ok: [testbed-node-2] 2026-03-10 
01:01:29.845890 | orchestrator | 2026-03-10 01:01:29.845893 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-10 01:01:29.845896 | orchestrator | Tuesday 10 March 2026 00:55:13 +0000 (0:00:00.794) 0:06:19.376 ********* 2026-03-10 01:01:29.845899 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.845903 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.845906 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.845909 | orchestrator | 2026-03-10 01:01:29.845912 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-10 01:01:29.845915 | orchestrator | Tuesday 10 March 2026 00:55:14 +0000 (0:00:00.851) 0:06:20.228 ********* 2026-03-10 01:01:29.845918 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.845921 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.845926 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.845932 | orchestrator | 2026-03-10 01:01:29.845937 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-10 01:01:29.845943 | orchestrator | Tuesday 10 March 2026 00:55:15 +0000 (0:00:00.384) 0:06:20.612 ********* 2026-03-10 01:01:29.845946 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.845949 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.845952 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.845955 | orchestrator | 2026-03-10 01:01:29.845958 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-10 01:01:29.845962 | orchestrator | Tuesday 10 March 2026 00:55:15 +0000 (0:00:00.805) 0:06:21.418 ********* 2026-03-10 01:01:29.845965 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.845968 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.845971 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.845974 | orchestrator | 
2026-03-10 01:01:29.845977 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-10 01:01:29.845983 | orchestrator | Tuesday 10 March 2026 00:55:16 +0000 (0:00:00.455) 0:06:21.873 ********* 2026-03-10 01:01:29.845986 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.845989 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.845992 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.845995 | orchestrator | 2026-03-10 01:01:29.845998 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-10 01:01:29.846002 | orchestrator | Tuesday 10 March 2026 00:55:16 +0000 (0:00:00.357) 0:06:22.231 ********* 2026-03-10 01:01:29.846007 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.846010 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.846131 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.846142 | orchestrator | 2026-03-10 01:01:29.846148 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-10 01:01:29.846154 | orchestrator | Tuesday 10 March 2026 00:55:17 +0000 (0:00:00.372) 0:06:22.603 ********* 2026-03-10 01:01:29.846159 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.846165 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.846171 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.846176 | orchestrator | 2026-03-10 01:01:29.846182 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-10 01:01:29.846187 | orchestrator | Tuesday 10 March 2026 00:55:17 +0000 (0:00:00.355) 0:06:22.959 ********* 2026-03-10 01:01:29.846193 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.846198 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.846203 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.846209 | orchestrator | 
2026-03-10 01:01:29.846219 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-10 01:01:29.846225 | orchestrator | Tuesday 10 March 2026 00:55:18 +0000 (0:00:00.691) 0:06:23.650 ********* 2026-03-10 01:01:29.846231 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.846237 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.846242 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.846247 | orchestrator | 2026-03-10 01:01:29.846250 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-10 01:01:29.846253 | orchestrator | Tuesday 10 March 2026 00:55:18 +0000 (0:00:00.449) 0:06:24.099 ********* 2026-03-10 01:01:29.846256 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.846259 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.846262 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.846283 | orchestrator | 2026-03-10 01:01:29.846291 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-10 01:01:29.846294 | orchestrator | Tuesday 10 March 2026 00:55:18 +0000 (0:00:00.331) 0:06:24.430 ********* 2026-03-10 01:01:29.846297 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.846300 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.846303 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.846306 | orchestrator | 2026-03-10 01:01:29.846309 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-10 01:01:29.846312 | orchestrator | Tuesday 10 March 2026 00:55:19 +0000 (0:00:00.838) 0:06:25.269 ********* 2026-03-10 01:01:29.846316 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-10 01:01:29.846319 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-10 01:01:29.846322 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2026-03-10 01:01:29.846325 | orchestrator | 2026-03-10 01:01:29.846328 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-10 01:01:29.846332 | orchestrator | Tuesday 10 March 2026 00:55:20 +0000 (0:00:00.757) 0:06:26.027 ********* 2026-03-10 01:01:29.846335 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:01:29.846338 | orchestrator | 2026-03-10 01:01:29.846341 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-10 01:01:29.846344 | orchestrator | Tuesday 10 March 2026 00:55:21 +0000 (0:00:00.637) 0:06:26.665 ********* 2026-03-10 01:01:29.846347 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.846350 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.846353 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.846357 | orchestrator | 2026-03-10 01:01:29.846360 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-10 01:01:29.846363 | orchestrator | Tuesday 10 March 2026 00:55:21 +0000 (0:00:00.713) 0:06:27.378 ********* 2026-03-10 01:01:29.846371 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.846374 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.846377 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.846380 | orchestrator | 2026-03-10 01:01:29.846383 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-10 01:01:29.846386 | orchestrator | Tuesday 10 March 2026 00:55:22 +0000 (0:00:00.678) 0:06:28.057 ********* 2026-03-10 01:01:29.846389 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-10 01:01:29.846393 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-10 01:01:29.846396 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-03-10 01:01:29.846399 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-10 01:01:29.846402 | orchestrator | 2026-03-10 01:01:29.846405 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-10 01:01:29.846408 | orchestrator | Tuesday 10 March 2026 00:55:33 +0000 (0:00:10.433) 0:06:38.491 ********* 2026-03-10 01:01:29.846411 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.846414 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.846417 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.846420 | orchestrator | 2026-03-10 01:01:29.846423 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-10 01:01:29.846426 | orchestrator | Tuesday 10 March 2026 00:55:33 +0000 (0:00:00.414) 0:06:38.906 ********* 2026-03-10 01:01:29.846429 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-10 01:01:29.846433 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-10 01:01:29.846436 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-10 01:01:29.846439 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:01:29.846442 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-10 01:01:29.846491 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:01:29.846497 | orchestrator | 2026-03-10 01:01:29.846500 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-10 01:01:29.846503 | orchestrator | Tuesday 10 March 2026 00:55:35 +0000 (0:00:02.507) 0:06:41.413 ********* 2026-03-10 01:01:29.846509 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-10 01:01:29.846515 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-10 01:01:29.846521 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-10 
01:01:29.846525 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-10 01:01:29.846528 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-10 01:01:29.846531 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-10 01:01:29.846535 | orchestrator | 2026-03-10 01:01:29.846538 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-10 01:01:29.846541 | orchestrator | Tuesday 10 March 2026 00:55:37 +0000 (0:00:01.842) 0:06:43.256 ********* 2026-03-10 01:01:29.846544 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.846547 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.846550 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.846553 | orchestrator | 2026-03-10 01:01:29.846556 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-10 01:01:29.846559 | orchestrator | Tuesday 10 March 2026 00:55:38 +0000 (0:00:00.762) 0:06:44.018 ********* 2026-03-10 01:01:29.846566 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.846569 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.846572 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.846576 | orchestrator | 2026-03-10 01:01:29.846582 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-10 01:01:29.846587 | orchestrator | Tuesday 10 March 2026 00:55:38 +0000 (0:00:00.337) 0:06:44.356 ********* 2026-03-10 01:01:29.846593 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.846598 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.846603 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.846611 | orchestrator | 2026-03-10 01:01:29.846614 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-10 01:01:29.846617 | orchestrator | Tuesday 10 March 2026 00:55:39 +0000 (0:00:00.378) 0:06:44.734 
********* 2026-03-10 01:01:29.846621 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:01:29.846624 | orchestrator | 2026-03-10 01:01:29.846627 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-10 01:01:29.846630 | orchestrator | Tuesday 10 March 2026 00:55:40 +0000 (0:00:00.909) 0:06:45.644 ********* 2026-03-10 01:01:29.846633 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.846636 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.846639 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.846642 | orchestrator | 2026-03-10 01:01:29.846645 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-10 01:01:29.846649 | orchestrator | Tuesday 10 March 2026 00:55:40 +0000 (0:00:00.432) 0:06:46.076 ********* 2026-03-10 01:01:29.846652 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.846655 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.846658 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.846662 | orchestrator | 2026-03-10 01:01:29.846667 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-10 01:01:29.846673 | orchestrator | Tuesday 10 March 2026 00:55:41 +0000 (0:00:00.421) 0:06:46.498 ********* 2026-03-10 01:01:29.846676 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:01:29.846680 | orchestrator | 2026-03-10 01:01:29.846683 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-10 01:01:29.846686 | orchestrator | Tuesday 10 March 2026 00:55:41 +0000 (0:00:00.970) 0:06:47.468 ********* 2026-03-10 01:01:29.846689 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.846692 | orchestrator | changed: 
[testbed-node-1] 2026-03-10 01:01:29.846695 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.846698 | orchestrator | 2026-03-10 01:01:29.846701 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-10 01:01:29.846704 | orchestrator | Tuesday 10 March 2026 00:55:43 +0000 (0:00:01.416) 0:06:48.885 ********* 2026-03-10 01:01:29.846707 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.846710 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.846714 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.846717 | orchestrator | 2026-03-10 01:01:29.846720 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-10 01:01:29.846723 | orchestrator | Tuesday 10 March 2026 00:55:44 +0000 (0:00:01.166) 0:06:50.051 ********* 2026-03-10 01:01:29.846726 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.846729 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.846732 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.846735 | orchestrator | 2026-03-10 01:01:29.846738 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-10 01:01:29.846741 | orchestrator | Tuesday 10 March 2026 00:55:46 +0000 (0:00:01.837) 0:06:51.889 ********* 2026-03-10 01:01:29.846744 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.846747 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.846752 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.846758 | orchestrator | 2026-03-10 01:01:29.846763 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-10 01:01:29.846769 | orchestrator | Tuesday 10 March 2026 00:55:48 +0000 (0:00:02.226) 0:06:54.115 ********* 2026-03-10 01:01:29.846775 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.846781 | orchestrator | skipping: 
[testbed-node-1] 2026-03-10 01:01:29.846786 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-10 01:01:29.846793 | orchestrator | 2026-03-10 01:01:29.846798 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-10 01:01:29.846808 | orchestrator | Tuesday 10 March 2026 00:55:49 +0000 (0:00:00.548) 0:06:54.664 ********* 2026-03-10 01:01:29.846825 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-10 01:01:29.846829 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-10 01:01:29.846833 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-03-10 01:01:29.846837 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-03-10 01:01:29.846840 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 
2026-03-10 01:01:29.846844 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-10 01:01:29.846848 | orchestrator | 2026-03-10 01:01:29.846851 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] **************************** 2026-03-10 01:01:29.846855 | orchestrator | Tuesday 10 March 2026 00:56:19 +0000 (0:00:30.258) 0:07:24.922 ********* 2026-03-10 01:01:29.846858 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] 2026-03-10 01:01:29.846862 | orchestrator | 2026-03-10 01:01:29.846865 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] *** 2026-03-10 01:01:29.846869 | orchestrator | Tuesday 10 March 2026 00:56:20 +0000 (0:00:01.283) 0:07:26.206 ********* 2026-03-10 01:01:29.846879 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.846886 | orchestrator | 2026-03-10 01:01:29.846889 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] ************************** 2026-03-10 01:01:29.846893 | orchestrator | Tuesday 10 March 2026 00:56:21 +0000 (0:00:00.333) 0:07:26.540 ********* 2026-03-10 01:01:29.846897 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.846900 | orchestrator | 2026-03-10 01:01:29.846904 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] ***************************** 2026-03-10 01:01:29.846907 | orchestrator | Tuesday 10 March 2026 00:56:21 +0000 (0:00:00.153) 0:07:26.693 ********* 2026-03-10 01:01:29.846911 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat) 2026-03-10 01:01:29.846915 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs) 2026-03-10 01:01:29.846919 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful) 2026-03-10 01:01:29.846922 | orchestrator | 2026-03-10 01:01:29.846926 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] 
************************************** 2026-03-10 01:01:29.846929 | orchestrator | Tuesday 10 March 2026 00:56:27 +0000 (0:00:06.744) 0:07:33.438 ********* 2026-03-10 01:01:29.846933 | orchestrator | skipping: [testbed-node-2] => (item=balancer)  2026-03-10 01:01:29.846937 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard) 2026-03-10 01:01:29.846941 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus) 2026-03-10 01:01:29.846944 | orchestrator | skipping: [testbed-node-2] => (item=status)  2026-03-10 01:01:29.846948 | orchestrator | 2026-03-10 01:01:29.846951 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-10 01:01:29.846955 | orchestrator | Tuesday 10 March 2026 00:56:33 +0000 (0:00:05.542) 0:07:38.981 ********* 2026-03-10 01:01:29.846958 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.846962 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.846965 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.846969 | orchestrator | 2026-03-10 01:01:29.846972 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] ********************************** 2026-03-10 01:01:29.846976 | orchestrator | Tuesday 10 March 2026 00:56:34 +0000 (0:00:00.640) 0:07:39.621 ********* 2026-03-10 01:01:29.846981 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:01:29.846986 | orchestrator | 2026-03-10 01:01:29.846991 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ******** 2026-03-10 01:01:29.846999 | orchestrator | Tuesday 10 March 2026 00:56:35 +0000 (0:00:00.887) 0:07:40.509 ********* 2026-03-10 01:01:29.847005 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.847011 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.847016 | orchestrator | ok: 
[testbed-node-2] 2026-03-10 01:01:29.847022 | orchestrator | 2026-03-10 01:01:29.847027 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] *********************** 2026-03-10 01:01:29.847033 | orchestrator | Tuesday 10 March 2026 00:56:35 +0000 (0:00:00.408) 0:07:40.917 ********* 2026-03-10 01:01:29.847039 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.847045 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.847051 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.847056 | orchestrator | 2026-03-10 01:01:29.847063 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ******************** 2026-03-10 01:01:29.847069 | orchestrator | Tuesday 10 March 2026 00:56:36 +0000 (0:00:01.413) 0:07:42.330 ********* 2026-03-10 01:01:29.847074 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-10 01:01:29.847080 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-10 01:01:29.847086 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-10 01:01:29.847092 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.847097 | orchestrator | 2026-03-10 01:01:29.847103 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] ********* 2026-03-10 01:01:29.847107 | orchestrator | Tuesday 10 March 2026 00:56:37 +0000 (0:00:01.006) 0:07:43.337 ********* 2026-03-10 01:01:29.847111 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.847114 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.847118 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.847121 | orchestrator | 2026-03-10 01:01:29.847125 | orchestrator | PLAY [Apply role ceph-osd] ***************************************************** 2026-03-10 01:01:29.847129 | orchestrator | 2026-03-10 01:01:29.847132 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-10 
01:01:29.847136 | orchestrator | Tuesday 10 March 2026 00:56:38 +0000 (0:00:01.005) 0:07:44.342 ********* 2026-03-10 01:01:29.847152 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:01:29.847156 | orchestrator | 2026-03-10 01:01:29.847159 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-10 01:01:29.847162 | orchestrator | Tuesday 10 March 2026 00:56:39 +0000 (0:00:00.814) 0:07:45.157 ********* 2026-03-10 01:01:29.847165 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:01:29.847191 | orchestrator | 2026-03-10 01:01:29.847197 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-10 01:01:29.847200 | orchestrator | Tuesday 10 March 2026 00:56:40 +0000 (0:00:00.939) 0:07:46.097 ********* 2026-03-10 01:01:29.847204 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.847207 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.847210 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.847213 | orchestrator | 2026-03-10 01:01:29.847216 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-10 01:01:29.847219 | orchestrator | Tuesday 10 March 2026 00:56:40 +0000 (0:00:00.347) 0:07:46.444 ********* 2026-03-10 01:01:29.847222 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.847225 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.847228 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.847232 | orchestrator | 2026-03-10 01:01:29.847241 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-10 01:01:29.847245 | orchestrator | Tuesday 10 March 2026 00:56:41 +0000 (0:00:00.816) 0:07:47.261 ********* 
2026-03-10 01:01:29.847248 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.847252 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.847255 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.847263 | orchestrator |
2026-03-10 01:01:29.847269 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-10 01:01:29.847275 | orchestrator | Tuesday 10 March 2026 00:56:42 +0000 (0:00:00.733) 0:07:47.994 *********
2026-03-10 01:01:29.847280 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.847285 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.847290 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.847293 | orchestrator |
2026-03-10 01:01:29.847296 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-10 01:01:29.847299 | orchestrator | Tuesday 10 March 2026 00:56:43 +0000 (0:00:01.085) 0:07:49.080 *********
2026-03-10 01:01:29.847302 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.847307 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.847313 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.847318 | orchestrator |
2026-03-10 01:01:29.847325 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-10 01:01:29.847339 | orchestrator | Tuesday 10 March 2026 00:56:43 +0000 (0:00:00.339) 0:07:49.420 *********
2026-03-10 01:01:29.847343 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.847346 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.847349 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.847352 | orchestrator |
2026-03-10 01:01:29.847355 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-10 01:01:29.847358 | orchestrator | Tuesday 10 March 2026 00:56:44 +0000 (0:00:00.381) 0:07:49.801 *********
2026-03-10 01:01:29.847361 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.847364 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.847367 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.847370 | orchestrator |
2026-03-10 01:01:29.847373 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-10 01:01:29.847377 | orchestrator | Tuesday 10 March 2026 00:56:44 +0000 (0:00:00.347) 0:07:50.149 *********
2026-03-10 01:01:29.847380 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.847383 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.847386 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.847389 | orchestrator |
2026-03-10 01:01:29.847392 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-10 01:01:29.847396 | orchestrator | Tuesday 10 March 2026 00:56:45 +0000 (0:00:01.156) 0:07:51.306 *********
2026-03-10 01:01:29.847399 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.847402 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.847405 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.847408 | orchestrator |
2026-03-10 01:01:29.847411 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-10 01:01:29.847414 | orchestrator | Tuesday 10 March 2026 00:56:46 +0000 (0:00:00.828) 0:07:52.134 *********
2026-03-10 01:01:29.847417 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.847420 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.847423 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.847426 | orchestrator |
2026-03-10 01:01:29.847429 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-10 01:01:29.847433 | orchestrator | Tuesday 10 March 2026 00:56:47 +0000 (0:00:00.481) 0:07:52.615 *********
2026-03-10 01:01:29.847436 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.847439 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.847442 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.847445 | orchestrator |
2026-03-10 01:01:29.847457 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-10 01:01:29.847462 | orchestrator | Tuesday 10 March 2026 00:56:47 +0000 (0:00:00.407) 0:07:53.023 *********
2026-03-10 01:01:29.847469 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.847472 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.847475 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.847478 | orchestrator |
2026-03-10 01:01:29.847481 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-10 01:01:29.847487 | orchestrator | Tuesday 10 March 2026 00:56:48 +0000 (0:00:00.771) 0:07:53.795 *********
2026-03-10 01:01:29.847490 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.847493 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.847496 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.847499 | orchestrator |
2026-03-10 01:01:29.847502 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-10 01:01:29.847505 | orchestrator | Tuesday 10 March 2026 00:56:48 +0000 (0:00:00.404) 0:07:54.200 *********
2026-03-10 01:01:29.847508 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.847511 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.847529 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.847533 | orchestrator |
2026-03-10 01:01:29.847536 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-10 01:01:29.847539 | orchestrator | Tuesday 10 March 2026 00:56:49 +0000 (0:00:00.472) 0:07:54.673 *********
2026-03-10 01:01:29.847542 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.847545 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.847548 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.847551 | orchestrator |
2026-03-10 01:01:29.847554 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-10 01:01:29.847557 | orchestrator | Tuesday 10 March 2026 00:56:49 +0000 (0:00:00.319) 0:07:54.992 *********
2026-03-10 01:01:29.847560 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.847563 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.847567 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.847570 | orchestrator |
2026-03-10 01:01:29.847573 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-10 01:01:29.847576 | orchestrator | Tuesday 10 March 2026 00:56:50 +0000 (0:00:00.747) 0:07:55.740 *********
2026-03-10 01:01:29.847579 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.847582 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.847585 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.847588 | orchestrator |
2026-03-10 01:01:29.847591 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-10 01:01:29.847597 | orchestrator | Tuesday 10 March 2026 00:56:50 +0000 (0:00:00.391) 0:07:56.131 *********
2026-03-10 01:01:29.847600 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.847603 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.847606 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.847609 | orchestrator |
2026-03-10 01:01:29.847612 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-10 01:01:29.847615 | orchestrator | Tuesday 10 March 2026 00:56:51 +0000 (0:00:00.518) 0:07:56.650 *********
2026-03-10 01:01:29.847618 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.847621 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.847624 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.847627 | orchestrator |
2026-03-10 01:01:29.847631 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-03-10 01:01:29.847634 | orchestrator | Tuesday 10 March 2026 00:56:52 +0000 (0:00:00.987) 0:07:57.637 *********
2026-03-10 01:01:29.847637 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.847640 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.847643 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.847646 | orchestrator |
2026-03-10 01:01:29.847649 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-03-10 01:01:29.847652 | orchestrator | Tuesday 10 March 2026 00:56:52 +0000 (0:00:00.446) 0:07:58.083 *********
2026-03-10 01:01:29.847655 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-10 01:01:29.847658 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-10 01:01:29.847667 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-10 01:01:29.847671 | orchestrator |
2026-03-10 01:01:29.847674 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-03-10 01:01:29.847681 | orchestrator | Tuesday 10 March 2026 00:56:53 +0000 (0:00:00.787) 0:07:58.871 *********
2026-03-10 01:01:29.847684 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 01:01:29.847687 | orchestrator |
2026-03-10 01:01:29.847690 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-03-10 01:01:29.847694 | orchestrator | Tuesday 10 March 2026 00:56:53 +0000 (0:00:00.608) 0:07:59.480 *********
2026-03-10 01:01:29.847711 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.847714 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.847721 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.847724 | orchestrator |
2026-03-10 01:01:29.847727 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-03-10 01:01:29.847731 | orchestrator | Tuesday 10 March 2026 00:56:54 +0000 (0:00:00.788) 0:08:00.268 *********
2026-03-10 01:01:29.847734 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.847737 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.847740 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.847743 | orchestrator |
2026-03-10 01:01:29.847746 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-03-10 01:01:29.847749 | orchestrator | Tuesday 10 March 2026 00:56:55 +0000 (0:00:00.348) 0:08:00.617 *********
2026-03-10 01:01:29.847752 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.847755 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.847758 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.847761 | orchestrator |
2026-03-10 01:01:29.847764 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-03-10 01:01:29.847767 | orchestrator | Tuesday 10 March 2026 00:56:55 +0000 (0:00:00.702) 0:08:01.320 *********
2026-03-10 01:01:29.847770 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.847773 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.847776 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.847780 | orchestrator |
2026-03-10 01:01:29.847783 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-03-10 01:01:29.847786 | orchestrator | Tuesday 10 March 2026 00:56:56 +0000 (0:00:00.525) 0:08:01.846 *********
2026-03-10 01:01:29.847789 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-10 01:01:29.847792 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-10 01:01:29.847795 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-10 01:01:29.847798 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-10 01:01:29.847801 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-10 01:01:29.847807 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-10 01:01:29.847810 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-10 01:01:29.847813 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-10 01:01:29.847816 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-10 01:01:29.847819 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-10 01:01:29.847825 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-10 01:01:29.847828 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-10 01:01:29.847831 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-10 01:01:29.847834 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-10 01:01:29.847837 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-10 01:01:29.847843 | orchestrator |
2026-03-10 01:01:29.847848 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-03-10 01:01:29.847851 | orchestrator | Tuesday 10 March 2026 00:56:59 +0000 (0:00:03.149) 0:08:04.996 *********
2026-03-10 01:01:29.847854 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.847857 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.847860 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.847863 | orchestrator |
2026-03-10 01:01:29.847866 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-03-10 01:01:29.847869 | orchestrator | Tuesday 10 March 2026 00:56:59 +0000 (0:00:00.413) 0:08:05.409 *********
2026-03-10 01:01:29.847872 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 01:01:29.847875 | orchestrator |
2026-03-10 01:01:29.847878 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-03-10 01:01:29.847881 | orchestrator | Tuesday 10 March 2026 00:57:00 +0000 (0:00:00.700) 0:08:06.111 *********
2026-03-10 01:01:29.847884 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-10 01:01:29.847887 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-10 01:01:29.847890 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-03-10 01:01:29.847893 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-03-10 01:01:29.847896 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-10 01:01:29.847899 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-03-10 01:01:29.847902 | orchestrator |
2026-03-10 01:01:29.847906 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-03-10 01:01:29.847909 | orchestrator | Tuesday 10 March 2026 00:57:02 +0000 (0:00:01.655) 0:08:07.766 *********
2026-03-10 01:01:29.847912 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-10 01:01:29.847915 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-10 01:01:29.847918 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-10 01:01:29.847921 | orchestrator |
2026-03-10 01:01:29.847924 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-03-10 01:01:29.847927 | orchestrator | Tuesday 10 March 2026 00:57:04 +0000 (0:00:02.181) 0:08:09.948 *********
2026-03-10 01:01:29.847930 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-10 01:01:29.847933 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-10 01:01:29.847937 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:01:29.847943 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-10 01:01:29.847948 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-10 01:01:29.847954 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:01:29.847960 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-10 01:01:29.847963 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-10 01:01:29.847966 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:01:29.847969 | orchestrator |
2026-03-10 01:01:29.847972 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-03-10 01:01:29.847975 | orchestrator | Tuesday 10 March 2026 00:57:05 +0000 (0:00:01.175) 0:08:11.123 *********
2026-03-10 01:01:29.847979 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-10 01:01:29.847982 | orchestrator |
2026-03-10 01:01:29.847985 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-03-10 01:01:29.847988 | orchestrator | Tuesday 10 March 2026 00:57:07 +0000 (0:00:02.186) 0:08:13.310 *********
2026-03-10 01:01:29.847991 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 01:01:29.847994 | orchestrator |
2026-03-10 01:01:29.847997 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-03-10 01:01:29.848002 | orchestrator | Tuesday 10 March 2026 00:57:08 +0000 (0:00:00.716) 0:08:14.027 *********
2026-03-10 01:01:29.848006 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-276dc5cf-0fff-57f4-b280-c3cda8556bee', 'data_vg': 'ceph-276dc5cf-0fff-57f4-b280-c3cda8556bee'})
2026-03-10 01:01:29.848009 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e', 'data_vg': 'ceph-c7cdfd74-cae8-56d1-a0f9-4438e0fe684e'})
2026-03-10 01:01:29.848017 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df', 'data_vg': 'ceph-c2da093f-67f0-5a54-a6a1-4e0ffcdb14df'})
2026-03-10 01:01:29.848023 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-1c4f45a1-f837-5281-b6b5-75662d68eedd', 'data_vg': 'ceph-1c4f45a1-f837-5281-b6b5-75662d68eedd'})
2026-03-10 01:01:29.848029 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-1e5abf04-63a5-5f41-bb2b-61caa92fdc91', 'data_vg': 'ceph-1e5abf04-63a5-5f41-bb2b-61caa92fdc91'})
2026-03-10 01:01:29.848035 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-5a55caf6-84ae-542a-a466-02d3e6c6095e', 'data_vg': 'ceph-5a55caf6-84ae-542a-a466-02d3e6c6095e'})
2026-03-10 01:01:29.848040 | orchestrator |
2026-03-10 01:01:29.848046 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-03-10 01:01:29.848051 | orchestrator | Tuesday 10 March 2026 00:57:51 +0000 (0:00:42.763) 0:08:56.790 *********
2026-03-10 01:01:29.848056 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848061 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.848066 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.848072 | orchestrator |
2026-03-10 01:01:29.848077 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-03-10 01:01:29.848086 | orchestrator | Tuesday 10 March 2026 00:57:51 +0000 (0:00:00.336) 0:08:57.126 *********
2026-03-10 01:01:29.848092 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 01:01:29.848095 | orchestrator |
2026-03-10 01:01:29.848098 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-03-10 01:01:29.848101 | orchestrator | Tuesday 10 March 2026 00:57:52 +0000 (0:00:00.848) 0:08:57.975 *********
2026-03-10 01:01:29.848104 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.848107 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.848110 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.848113 | orchestrator |
2026-03-10 01:01:29.848117 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-03-10 01:01:29.848120 | orchestrator | Tuesday 10 March 2026 00:57:53 +0000 (0:00:00.692) 0:08:58.667 *********
2026-03-10 01:01:29.848123 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.848126 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.848129 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.848133 | orchestrator |
2026-03-10 01:01:29.848139 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-03-10 01:01:29.848145 | orchestrator | Tuesday 10 March 2026 00:57:56 +0000 (0:00:02.857) 0:09:01.525 *********
2026-03-10 01:01:29.848148 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 01:01:29.848151 | orchestrator |
2026-03-10 01:01:29.848154 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-03-10 01:01:29.848157 | orchestrator | Tuesday 10 March 2026 00:57:56 +0000 (0:00:00.934) 0:09:02.459 *********
2026-03-10 01:01:29.848160 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:01:29.848163 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:01:29.848167 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:01:29.848170 | orchestrator |
2026-03-10 01:01:29.848173 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-03-10 01:01:29.848176 | orchestrator | Tuesday 10 March 2026 00:57:58 +0000 (0:00:01.261) 0:09:03.721 *********
2026-03-10 01:01:29.848181 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:01:29.848185 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:01:29.848188 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:01:29.848191 | orchestrator |
2026-03-10 01:01:29.848194 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-03-10 01:01:29.848197 | orchestrator | Tuesday 10 March 2026 00:57:59 +0000 (0:00:01.169) 0:09:04.890 *********
2026-03-10 01:01:29.848200 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:01:29.848203 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:01:29.848206 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:01:29.848209 | orchestrator |
2026-03-10 01:01:29.848212 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-03-10 01:01:29.848215 | orchestrator | Tuesday 10 March 2026 00:58:01 +0000 (0:00:01.890) 0:09:06.781 *********
2026-03-10 01:01:29.848219 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848222 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.848225 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.848228 | orchestrator |
2026-03-10 01:01:29.848233 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-03-10 01:01:29.848238 | orchestrator | Tuesday 10 March 2026 00:58:01 +0000 (0:00:00.655) 0:09:07.436 *********
2026-03-10 01:01:29.848244 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848250 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.848255 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.848261 | orchestrator |
2026-03-10 01:01:29.848266 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-03-10 01:01:29.848272 | orchestrator | Tuesday 10 March 2026 00:58:02 +0000 (0:00:00.399) 0:09:07.836 *********
2026-03-10 01:01:29.848278 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-10 01:01:29.848281 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-03-10 01:01:29.848284 | orchestrator | ok: [testbed-node-5] => (item=3)
2026-03-10 01:01:29.848287 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-03-10 01:01:29.848290 | orchestrator | ok: [testbed-node-4] => (item=4)
2026-03-10 01:01:29.848293 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-03-10 01:01:29.848296 | orchestrator |
2026-03-10 01:01:29.848299 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-03-10 01:01:29.848302 | orchestrator | Tuesday 10 March 2026 00:58:03 +0000 (0:00:01.059) 0:09:08.895 *********
2026-03-10 01:01:29.848305 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-10 01:01:29.848308 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-10 01:01:29.848311 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-03-10 01:01:29.848315 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-03-10 01:01:29.848318 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-03-10 01:01:29.848323 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-10 01:01:29.848326 | orchestrator |
2026-03-10 01:01:29.848329 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-03-10 01:01:29.848332 | orchestrator | Tuesday 10 March 2026 00:58:05 +0000 (0:00:02.456) 0:09:11.352 *********
2026-03-10 01:01:29.848335 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-10 01:01:29.848338 | orchestrator | changed: [testbed-node-5] => (item=3)
2026-03-10 01:01:29.848341 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-10 01:01:29.848344 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-03-10 01:01:29.848348 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-10 01:01:29.848351 | orchestrator | changed: [testbed-node-4] => (item=4)
2026-03-10 01:01:29.848354 | orchestrator |
2026-03-10 01:01:29.848357 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-03-10 01:01:29.848360 | orchestrator | Tuesday 10 March 2026 00:58:10 +0000 (0:00:04.327) 0:09:15.680 *********
2026-03-10 01:01:29.848363 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848366 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.848369 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-10 01:01:29.848374 | orchestrator |
2026-03-10 01:01:29.848378 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-03-10 01:01:29.848383 | orchestrator | Tuesday 10 March 2026 00:58:13 +0000 (0:00:03.093) 0:09:18.774 *********
2026-03-10 01:01:29.848386 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848389 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.848392 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-03-10 01:01:29.848395 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-10 01:01:29.848398 | orchestrator |
2026-03-10 01:01:29.848401 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2026-03-10 01:01:29.848404 | orchestrator | Tuesday 10 March 2026 00:58:25 +0000 (0:00:12.595) 0:09:31.369 *********
2026-03-10 01:01:29.848408 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848411 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.848414 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.848417 | orchestrator |
2026-03-10 01:01:29.848420 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-10 01:01:29.848423 | orchestrator | Tuesday 10 March 2026 00:58:27 +0000 (0:00:01.293) 0:09:32.663 *********
2026-03-10 01:01:29.848426 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848429 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.848432 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.848435 | orchestrator |
2026-03-10 01:01:29.848438 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2026-03-10 01:01:29.848441 | orchestrator | Tuesday 10 March 2026 00:58:27 +0000 (0:00:00.377) 0:09:33.041 *********
2026-03-10 01:01:29.848444 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 01:01:29.848457 | orchestrator |
2026-03-10 01:01:29.848461 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2026-03-10 01:01:29.848464 | orchestrator | Tuesday 10 March 2026 00:58:28 +0000 (0:00:00.842) 0:09:33.883 *********
2026-03-10 01:01:29.848467 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 01:01:29.848470 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 01:01:29.848473 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 01:01:29.848476 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848479 | orchestrator |
2026-03-10 01:01:29.848482 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2026-03-10 01:01:29.848485 | orchestrator | Tuesday 10 March 2026 00:58:28 +0000 (0:00:00.425) 0:09:34.309 *********
2026-03-10 01:01:29.848488 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848491 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.848494 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.848497 | orchestrator |
2026-03-10 01:01:29.848500 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2026-03-10 01:01:29.848503 | orchestrator | Tuesday 10 March 2026 00:58:29 +0000 (0:00:00.329) 0:09:34.638 *********
2026-03-10 01:01:29.848506 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848510 | orchestrator |
2026-03-10 01:01:29.848513 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2026-03-10 01:01:29.848516 | orchestrator | Tuesday 10 March 2026 00:58:29 +0000 (0:00:00.283) 0:09:34.922 *********
2026-03-10 01:01:29.848519 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848522 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.848525 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.848528 | orchestrator |
2026-03-10 01:01:29.848531 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2026-03-10 01:01:29.848534 | orchestrator | Tuesday 10 March 2026 00:58:29 +0000 (0:00:00.350) 0:09:35.273 *********
2026-03-10 01:01:29.848537 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848542 | orchestrator |
2026-03-10 01:01:29.848545 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2026-03-10 01:01:29.848548 | orchestrator | Tuesday 10 March 2026 00:58:30 +0000 (0:00:00.223) 0:09:35.496 *********
2026-03-10 01:01:29.848551 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848554 | orchestrator |
2026-03-10 01:01:29.848557 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2026-03-10 01:01:29.848561 | orchestrator | Tuesday 10 March 2026 00:58:30 +0000 (0:00:00.234) 0:09:35.731 *********
2026-03-10 01:01:29.848564 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848567 | orchestrator |
2026-03-10 01:01:29.848570 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2026-03-10 01:01:29.848573 | orchestrator | Tuesday 10 March 2026 00:58:30 +0000 (0:00:00.150) 0:09:35.882 *********
2026-03-10 01:01:29.848576 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848579 | orchestrator |
2026-03-10 01:01:29.848584 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2026-03-10 01:01:29.848587 | orchestrator | Tuesday 10 March 2026 00:58:31 +0000 (0:00:00.903) 0:09:36.785 *********
2026-03-10 01:01:29.848590 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848593 | orchestrator |
2026-03-10 01:01:29.848596 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2026-03-10 01:01:29.848599 | orchestrator | Tuesday 10 March 2026 00:58:31 +0000 (0:00:00.255) 0:09:37.041 *********
2026-03-10 01:01:29.848602 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 01:01:29.848605 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 01:01:29.848609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 01:01:29.848612 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848615 | orchestrator |
2026-03-10 01:01:29.848618 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2026-03-10 01:01:29.848621 | orchestrator | Tuesday 10 March 2026 00:58:32 +0000 (0:00:00.444) 0:09:37.486 *********
2026-03-10 01:01:29.848624 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848627 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.848630 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.848633 | orchestrator |
2026-03-10 01:01:29.848636 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2026-03-10 01:01:29.848643 | orchestrator | Tuesday 10 March 2026 00:58:32 +0000 (0:00:00.351) 0:09:37.837 *********
2026-03-10 01:01:29.848646 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848649 | orchestrator |
2026-03-10 01:01:29.848652 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2026-03-10 01:01:29.848655 | orchestrator | Tuesday 10 March 2026 00:58:32 +0000 (0:00:00.250) 0:09:38.088 *********
2026-03-10 01:01:29.848658 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848661 | orchestrator |
2026-03-10 01:01:29.848664 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2026-03-10 01:01:29.848667 | orchestrator |
2026-03-10 01:01:29.848671 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-10 01:01:29.848677 | orchestrator | Tuesday 10 March 2026 00:58:33 +0000 (0:00:01.084) 0:09:39.173 *********
2026-03-10 01:01:29.848680 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:01:29.848684 | orchestrator |
2026-03-10 01:01:29.848687 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-10 01:01:29.848690 | orchestrator | Tuesday 10 March 2026 00:58:35 +0000 (0:00:01.318) 0:09:40.491 *********
2026-03-10 01:01:29.848693 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:01:29.848696 | orchestrator |
2026-03-10 01:01:29.848699 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-10 01:01:29.848704 | orchestrator | Tuesday 10 March 2026 00:58:36 +0000 (0:00:01.408) 0:09:41.900 *********
2026-03-10 01:01:29.848707 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848711 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.848714 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.848717 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.848720 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.848723 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.848726 | orchestrator |
2026-03-10 01:01:29.848729 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-10 01:01:29.848732 | orchestrator | Tuesday 10 March 2026 00:58:37 +0000 (0:00:01.192) 0:09:43.092 *********
2026-03-10 01:01:29.848735 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.848738 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.848741 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.848745 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.848748 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.848751 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.848754 | orchestrator |
2026-03-10 01:01:29.848757 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-10 01:01:29.848760 | orchestrator | Tuesday 10 March 2026 00:58:38 +0000 (0:00:01.229) 0:09:44.322 *********
2026-03-10 01:01:29.848763 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.848766 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.848769 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.848772 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.848775 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.848778 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.848782 | orchestrator |
2026-03-10 01:01:29.848785 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-10 01:01:29.848788 | orchestrator | Tuesday 10 March 2026 00:58:40 +0000 (0:00:01.279) 0:09:45.602 *********
2026-03-10 01:01:29.848791 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:01:29.848794 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.848797 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:01:29.848800 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:01:29.848804 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.848809 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.848815 | orchestrator |
2026-03-10 01:01:29.848820 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-10 01:01:29.848826 | orchestrator | Tuesday 10 March 2026 00:58:40 +0000 (0:00:00.822) 0:09:46.424 *********
2026-03-10 01:01:29.848830 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.848833 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.848836 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.848840 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:01:29.848845 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:01:29.848850 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:01:29.848856 | orchestrator |
2026-03-10 01:01:29.848861 | orchestrator | TASK [ceph-handler : Check for a rbd mirror
container] ************************* 2026-03-10 01:01:29.848866 | orchestrator | Tuesday 10 March 2026 00:58:42 +0000 (0:00:01.457) 0:09:47.881 ********* 2026-03-10 01:01:29.848871 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.848876 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.848881 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.848884 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.848888 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.848891 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.848894 | orchestrator | 2026-03-10 01:01:29.848897 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-10 01:01:29.848900 | orchestrator | Tuesday 10 March 2026 00:58:43 +0000 (0:00:00.670) 0:09:48.552 ********* 2026-03-10 01:01:29.848903 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.848906 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.848911 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.848915 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.848918 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.848921 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.848924 | orchestrator | 2026-03-10 01:01:29.848927 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-10 01:01:29.848930 | orchestrator | Tuesday 10 March 2026 00:58:44 +0000 (0:00:00.961) 0:09:49.513 ********* 2026-03-10 01:01:29.848933 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.848936 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.848939 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.848942 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.848945 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.848948 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.848951 | 
orchestrator | 2026-03-10 01:01:29.848956 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-10 01:01:29.848959 | orchestrator | Tuesday 10 March 2026 00:58:45 +0000 (0:00:01.216) 0:09:50.730 ********* 2026-03-10 01:01:29.848962 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.848965 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.848968 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.848971 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.848974 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.848977 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.848980 | orchestrator | 2026-03-10 01:01:29.848983 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-10 01:01:29.848986 | orchestrator | Tuesday 10 March 2026 00:58:46 +0000 (0:00:01.749) 0:09:52.479 ********* 2026-03-10 01:01:29.848989 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.848992 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.848995 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.848998 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.849001 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.849004 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.849007 | orchestrator | 2026-03-10 01:01:29.849011 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-10 01:01:29.849014 | orchestrator | Tuesday 10 March 2026 00:58:47 +0000 (0:00:00.670) 0:09:53.149 ********* 2026-03-10 01:01:29.849017 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.849020 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.849023 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.849026 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.849029 | orchestrator | ok: [testbed-node-1] 2026-03-10 
01:01:29.849032 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.849035 | orchestrator | 2026-03-10 01:01:29.849038 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-10 01:01:29.849041 | orchestrator | Tuesday 10 March 2026 00:58:48 +0000 (0:00:01.063) 0:09:54.213 ********* 2026-03-10 01:01:29.849045 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.849050 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.849056 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.849060 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.849063 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.849067 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.849070 | orchestrator | 2026-03-10 01:01:29.849073 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-10 01:01:29.849076 | orchestrator | Tuesday 10 March 2026 00:58:49 +0000 (0:00:00.693) 0:09:54.906 ********* 2026-03-10 01:01:29.849079 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.849082 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.849085 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.849088 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.849091 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.849094 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.849100 | orchestrator | 2026-03-10 01:01:29.849103 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-10 01:01:29.849106 | orchestrator | Tuesday 10 March 2026 00:58:50 +0000 (0:00:00.984) 0:09:55.890 ********* 2026-03-10 01:01:29.849109 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.849112 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.849115 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.849119 | orchestrator | skipping: [testbed-node-0] 
2026-03-10 01:01:29.849122 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.849125 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.849128 | orchestrator | 2026-03-10 01:01:29.849131 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-10 01:01:29.849134 | orchestrator | Tuesday 10 March 2026 00:58:51 +0000 (0:00:00.695) 0:09:56.585 ********* 2026-03-10 01:01:29.849137 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.849140 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.849144 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.849149 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.849155 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.849160 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.849166 | orchestrator | 2026-03-10 01:01:29.849171 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-10 01:01:29.849176 | orchestrator | Tuesday 10 March 2026 00:58:52 +0000 (0:00:00.976) 0:09:57.561 ********* 2026-03-10 01:01:29.849181 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.849186 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.849192 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.849197 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:01:29.849203 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:01:29.849208 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:01:29.849214 | orchestrator | 2026-03-10 01:01:29.849220 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-10 01:01:29.849225 | orchestrator | Tuesday 10 March 2026 00:58:52 +0000 (0:00:00.616) 0:09:58.178 ********* 2026-03-10 01:01:29.849233 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.849239 | orchestrator | skipping: [testbed-node-4] 
2026-03-10 01:01:29.849243 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.849246 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.849249 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.849252 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.849256 | orchestrator | 2026-03-10 01:01:29.849261 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-10 01:01:29.849267 | orchestrator | Tuesday 10 March 2026 00:58:53 +0000 (0:00:00.986) 0:09:59.164 ********* 2026-03-10 01:01:29.849273 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.849280 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.849284 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.849290 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.849297 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.849303 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.849309 | orchestrator | 2026-03-10 01:01:29.849314 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-10 01:01:29.849320 | orchestrator | Tuesday 10 March 2026 00:58:54 +0000 (0:00:00.707) 0:09:59.871 ********* 2026-03-10 01:01:29.849325 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.849330 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.849335 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.849340 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.849345 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.849350 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.849356 | orchestrator | 2026-03-10 01:01:29.849364 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-10 01:01:29.849369 | orchestrator | Tuesday 10 March 2026 00:58:55 +0000 (0:00:01.454) 0:10:01.326 ********* 2026-03-10 01:01:29.849374 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-10 01:01:29.849383 | orchestrator | 2026-03-10 01:01:29.849388 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-10 01:01:29.849394 | orchestrator | Tuesday 10 March 2026 00:58:59 +0000 (0:00:04.003) 0:10:05.330 ********* 2026-03-10 01:01:29.849399 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-10 01:01:29.849404 | orchestrator | 2026-03-10 01:01:29.849409 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-10 01:01:29.849414 | orchestrator | Tuesday 10 March 2026 00:59:02 +0000 (0:00:02.158) 0:10:07.489 ********* 2026-03-10 01:01:29.849419 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:01:29.849425 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:01:29.849428 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:01:29.849431 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.849434 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.849437 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.849440 | orchestrator | 2026-03-10 01:01:29.849443 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-10 01:01:29.849446 | orchestrator | Tuesday 10 March 2026 00:59:04 +0000 (0:00:02.258) 0:10:09.747 ********* 2026-03-10 01:01:29.849473 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:01:29.849477 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:01:29.849480 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:01:29.849483 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.849486 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.849489 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.849492 | orchestrator | 2026-03-10 01:01:29.849495 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-03-10 01:01:29.849498 | orchestrator | Tuesday 10 March 2026 00:59:05 +0000 (0:00:01.064) 0:10:10.811 ********* 2026-03-10 01:01:29.849501 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:01:29.849505 | orchestrator | 2026-03-10 01:01:29.849508 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-10 01:01:29.849511 | orchestrator | Tuesday 10 March 2026 00:59:06 +0000 (0:00:01.347) 0:10:12.158 ********* 2026-03-10 01:01:29.849514 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:01:29.849517 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:01:29.849520 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:01:29.849523 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.849526 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.849529 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.849532 | orchestrator | 2026-03-10 01:01:29.849535 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-10 01:01:29.849538 | orchestrator | Tuesday 10 March 2026 00:59:08 +0000 (0:00:02.058) 0:10:14.217 ********* 2026-03-10 01:01:29.849541 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:01:29.849544 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:01:29.849547 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:01:29.849550 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.849553 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.849556 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.849559 | orchestrator | 2026-03-10 01:01:29.849562 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-10 01:01:29.849565 | orchestrator | Tuesday 10 March 2026 00:59:12 +0000 (0:00:03.354) 
0:10:17.571 ********* 2026-03-10 01:01:29.849568 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:01:29.849571 | orchestrator | 2026-03-10 01:01:29.849574 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-10 01:01:29.849577 | orchestrator | Tuesday 10 March 2026 00:59:13 +0000 (0:00:01.467) 0:10:19.038 ********* 2026-03-10 01:01:29.849616 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.849620 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.849623 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.849626 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.849629 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.849632 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.849635 | orchestrator | 2026-03-10 01:01:29.849638 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-10 01:01:29.849641 | orchestrator | Tuesday 10 March 2026 00:59:14 +0000 (0:00:01.006) 0:10:20.045 ********* 2026-03-10 01:01:29.849645 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:01:29.849651 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:01:29.849655 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:01:29.849658 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:01:29.849661 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:01:29.849664 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:01:29.849667 | orchestrator | 2026-03-10 01:01:29.849670 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-10 01:01:29.849673 | orchestrator | Tuesday 10 March 2026 00:59:16 +0000 (0:00:02.354) 0:10:22.400 ********* 2026-03-10 01:01:29.849676 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.849679 | 
orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.849682 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.849685 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:01:29.849688 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:01:29.849691 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:01:29.849694 | orchestrator | 2026-03-10 01:01:29.849697 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-10 01:01:29.849700 | orchestrator | 2026-03-10 01:01:29.849704 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-10 01:01:29.849709 | orchestrator | Tuesday 10 March 2026 00:59:18 +0000 (0:00:01.237) 0:10:23.637 ********* 2026-03-10 01:01:29.849715 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:01:29.849721 | orchestrator | 2026-03-10 01:01:29.849729 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-10 01:01:29.849734 | orchestrator | Tuesday 10 March 2026 00:59:18 +0000 (0:00:00.527) 0:10:24.165 ********* 2026-03-10 01:01:29.849739 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:01:29.849744 | orchestrator | 2026-03-10 01:01:29.849750 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-10 01:01:29.849755 | orchestrator | Tuesday 10 March 2026 00:59:19 +0000 (0:00:00.840) 0:10:25.006 ********* 2026-03-10 01:01:29.849758 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.849761 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.849764 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.849767 | orchestrator | 2026-03-10 01:01:29.849771 | orchestrator | TASK [ceph-handler : Check for an osd 
container] ******************************* 2026-03-10 01:01:29.849777 | orchestrator | Tuesday 10 March 2026 00:59:19 +0000 (0:00:00.352) 0:10:25.359 ********* 2026-03-10 01:01:29.849783 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.849788 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.849793 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.849799 | orchestrator | 2026-03-10 01:01:29.849804 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-10 01:01:29.849809 | orchestrator | Tuesday 10 March 2026 00:59:20 +0000 (0:00:00.747) 0:10:26.106 ********* 2026-03-10 01:01:29.849814 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.849820 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.849826 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.849830 | orchestrator | 2026-03-10 01:01:29.849833 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-10 01:01:29.849840 | orchestrator | Tuesday 10 March 2026 00:59:21 +0000 (0:00:01.104) 0:10:27.211 ********* 2026-03-10 01:01:29.849843 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.849846 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.849849 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.849852 | orchestrator | 2026-03-10 01:01:29.849855 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-10 01:01:29.849858 | orchestrator | Tuesday 10 March 2026 00:59:22 +0000 (0:00:00.737) 0:10:27.949 ********* 2026-03-10 01:01:29.849861 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.849864 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.849867 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.849870 | orchestrator | 2026-03-10 01:01:29.849873 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-10 
01:01:29.849876 | orchestrator | Tuesday 10 March 2026 00:59:22 +0000 (0:00:00.336) 0:10:28.285 ********* 2026-03-10 01:01:29.849879 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.849883 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.849886 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.849889 | orchestrator | 2026-03-10 01:01:29.849892 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-10 01:01:29.849895 | orchestrator | Tuesday 10 March 2026 00:59:23 +0000 (0:00:00.358) 0:10:28.643 ********* 2026-03-10 01:01:29.849898 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.849901 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.849904 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.849907 | orchestrator | 2026-03-10 01:01:29.849910 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-10 01:01:29.849913 | orchestrator | Tuesday 10 March 2026 00:59:23 +0000 (0:00:00.653) 0:10:29.297 ********* 2026-03-10 01:01:29.849916 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.849919 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.849922 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.849925 | orchestrator | 2026-03-10 01:01:29.849928 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-10 01:01:29.849931 | orchestrator | Tuesday 10 March 2026 00:59:24 +0000 (0:00:00.857) 0:10:30.154 ********* 2026-03-10 01:01:29.849934 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.849937 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.849940 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.849944 | orchestrator | 2026-03-10 01:01:29.849947 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-10 01:01:29.849950 | orchestrator | 
Tuesday 10 March 2026 00:59:25 +0000 (0:00:00.805) 0:10:30.959 ********* 2026-03-10 01:01:29.849953 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.849956 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.849959 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.849962 | orchestrator | 2026-03-10 01:01:29.849965 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-10 01:01:29.849968 | orchestrator | Tuesday 10 March 2026 00:59:25 +0000 (0:00:00.377) 0:10:31.337 ********* 2026-03-10 01:01:29.849971 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.849976 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.849979 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.849982 | orchestrator | 2026-03-10 01:01:29.849985 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-10 01:01:29.849988 | orchestrator | Tuesday 10 March 2026 00:59:26 +0000 (0:00:00.781) 0:10:32.119 ********* 2026-03-10 01:01:29.849991 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.849994 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.849997 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.850001 | orchestrator | 2026-03-10 01:01:29.850004 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-10 01:01:29.850008 | orchestrator | Tuesday 10 March 2026 00:59:27 +0000 (0:00:00.475) 0:10:32.594 ********* 2026-03-10 01:01:29.850039 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.850044 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.850047 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.850050 | orchestrator | 2026-03-10 01:01:29.850053 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-10 01:01:29.850056 | orchestrator | Tuesday 10 March 2026 00:59:27 +0000 
(0:00:00.383) 0:10:32.978 ********* 2026-03-10 01:01:29.850059 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.850062 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.850065 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.850068 | orchestrator | 2026-03-10 01:01:29.850071 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-10 01:01:29.850076 | orchestrator | Tuesday 10 March 2026 00:59:27 +0000 (0:00:00.393) 0:10:33.371 ********* 2026-03-10 01:01:29.850081 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.850086 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.850092 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.850098 | orchestrator | 2026-03-10 01:01:29.850103 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-10 01:01:29.850109 | orchestrator | Tuesday 10 March 2026 00:59:28 +0000 (0:00:00.657) 0:10:34.029 ********* 2026-03-10 01:01:29.850114 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.850120 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.850125 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.850130 | orchestrator | 2026-03-10 01:01:29.850135 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-10 01:01:29.850141 | orchestrator | Tuesday 10 March 2026 00:59:28 +0000 (0:00:00.363) 0:10:34.392 ********* 2026-03-10 01:01:29.850146 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.850151 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.850157 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.850162 | orchestrator | 2026-03-10 01:01:29.850165 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-10 01:01:29.850168 | orchestrator | Tuesday 10 March 2026 00:59:29 +0000 (0:00:00.450) 
0:10:34.843 ********* 2026-03-10 01:01:29.850171 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.850174 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.850177 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.850180 | orchestrator | 2026-03-10 01:01:29.850183 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-10 01:01:29.850186 | orchestrator | Tuesday 10 March 2026 00:59:29 +0000 (0:00:00.616) 0:10:35.459 ********* 2026-03-10 01:01:29.850189 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.850192 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.850196 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.850199 | orchestrator | 2026-03-10 01:01:29.850202 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2026-03-10 01:01:29.850206 | orchestrator | Tuesday 10 March 2026 00:59:31 +0000 (0:00:01.083) 0:10:36.543 ********* 2026-03-10 01:01:29.850210 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.850213 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.850216 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2026-03-10 01:01:29.850219 | orchestrator | 2026-03-10 01:01:29.850222 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2026-03-10 01:01:29.850225 | orchestrator | Tuesday 10 March 2026 00:59:31 +0000 (0:00:00.526) 0:10:37.069 ********* 2026-03-10 01:01:29.850229 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-10 01:01:29.850232 | orchestrator | 2026-03-10 01:01:29.850235 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2026-03-10 01:01:29.850238 | orchestrator | Tuesday 10 March 2026 00:59:33 +0000 (0:00:02.203) 0:10:39.273 ********* 2026-03-10 01:01:29.850242 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2026-03-10 01:01:29.850249 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.850252 | orchestrator | 2026-03-10 01:01:29.850255 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2026-03-10 01:01:29.850258 | orchestrator | Tuesday 10 March 2026 00:59:34 +0000 (0:00:00.543) 0:10:39.817 ********* 2026-03-10 01:01:29.850262 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-10 01:01:29.850266 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-10 01:01:29.850270 | orchestrator | 2026-03-10 01:01:29.850273 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2026-03-10 01:01:29.850276 | orchestrator | Tuesday 10 March 2026 00:59:42 +0000 (0:00:08.005) 0:10:47.822 ********* 2026-03-10 01:01:29.850282 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-10 01:01:29.850285 | orchestrator | 2026-03-10 01:01:29.850288 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2026-03-10 01:01:29.850291 | orchestrator | Tuesday 10 March 2026 00:59:46 +0000 (0:00:03.730) 0:10:51.552 ********* 2026-03-10 01:01:29.850295 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-10 01:01:29.850298 | orchestrator | 2026-03-10 01:01:29.850301 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2026-03-10 01:01:29.850304 | orchestrator | Tuesday 10 March 2026 00:59:46 +0000 (0:00:00.632) 0:10:52.185 ********* 2026-03-10 01:01:29.850307 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-10 01:01:29.850312 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-10 01:01:29.850317 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2026-03-10 01:01:29.850322 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2026-03-10 01:01:29.850326 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2026-03-10 01:01:29.850331 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2026-03-10 01:01:29.850334 | orchestrator | 2026-03-10 01:01:29.850337 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2026-03-10 01:01:29.850340 | orchestrator | Tuesday 10 March 2026 00:59:47 +0000 (0:00:01.074) 0:10:53.260 ********* 2026-03-10 01:01:29.850343 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:01:29.850346 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-10 01:01:29.850349 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-10 01:01:29.850352 | orchestrator | 2026-03-10 01:01:29.850355 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2026-03-10 01:01:29.850358 | orchestrator | Tuesday 10 March 2026 00:59:50 +0000 (0:00:02.881) 0:10:56.141 ********* 2026-03-10 01:01:29.850361 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-10 01:01:29.850365 | orchestrator | changed: [testbed-node-4] 
=> (item=None) 2026-03-10 01:01:29.850368 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-10 01:01:29.850371 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:01:29.850374 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-10 01:01:29.850377 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:01:29.850380 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-10 01:01:29.850383 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-10 01:01:29.850389 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:01:29.850392 | orchestrator | 2026-03-10 01:01:29.850395 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2026-03-10 01:01:29.850398 | orchestrator | Tuesday 10 March 2026 00:59:52 +0000 (0:00:01.347) 0:10:57.488 ********* 2026-03-10 01:01:29.850401 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:01:29.850405 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:01:29.850410 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:01:29.850416 | orchestrator | 2026-03-10 01:01:29.850421 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2026-03-10 01:01:29.850427 | orchestrator | Tuesday 10 March 2026 00:59:55 +0000 (0:00:03.022) 0:11:00.511 ********* 2026-03-10 01:01:29.850432 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.850437 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.850443 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.850460 | orchestrator | 2026-03-10 01:01:29.850466 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2026-03-10 01:01:29.850471 | orchestrator | Tuesday 10 March 2026 00:59:55 +0000 (0:00:00.617) 0:11:01.128 ********* 2026-03-10 01:01:29.850476 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2026-03-10 01:01:29.850482 | orchestrator | 2026-03-10 01:01:29.850486 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2026-03-10 01:01:29.850492 | orchestrator | Tuesday 10 March 2026 00:59:56 +0000 (0:00:01.308) 0:11:02.437 ********* 2026-03-10 01:01:29.850498 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:01:29.850504 | orchestrator | 2026-03-10 01:01:29.850509 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2026-03-10 01:01:29.850515 | orchestrator | Tuesday 10 March 2026 00:59:57 +0000 (0:00:00.643) 0:11:03.080 ********* 2026-03-10 01:01:29.850520 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:01:29.850525 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:01:29.850530 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:01:29.850536 | orchestrator | 2026-03-10 01:01:29.850541 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2026-03-10 01:01:29.850547 | orchestrator | Tuesday 10 March 2026 00:59:59 +0000 (0:00:01.598) 0:11:04.679 ********* 2026-03-10 01:01:29.850552 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:01:29.850558 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:01:29.850563 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:01:29.850568 | orchestrator | 2026-03-10 01:01:29.850573 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2026-03-10 01:01:29.850579 | orchestrator | Tuesday 10 March 2026 01:00:00 +0000 (0:00:01.784) 0:11:06.463 ********* 2026-03-10 01:01:29.850585 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:01:29.850589 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:01:29.850592 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:01:29.850595 | orchestrator | 2026-03-10 
01:01:29.850598 | orchestrator | TASK [ceph-mds : Systemd start mds container] ********************************** 2026-03-10 01:01:29.850601 | orchestrator | Tuesday 10 March 2026 01:00:03 +0000 (0:00:02.126) 0:11:08.590 ********* 2026-03-10 01:01:29.850604 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:01:29.850610 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:01:29.850613 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:01:29.850616 | orchestrator | 2026-03-10 01:01:29.850619 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] ********************************* 2026-03-10 01:01:29.850624 | orchestrator | Tuesday 10 March 2026 01:00:05 +0000 (0:00:02.033) 0:11:10.624 ********* 2026-03-10 01:01:29.850628 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.850631 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.850634 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.850637 | orchestrator | 2026-03-10 01:01:29.850640 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-10 01:01:29.850648 | orchestrator | Tuesday 10 March 2026 01:00:06 +0000 (0:00:01.572) 0:11:12.197 ********* 2026-03-10 01:01:29.850651 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:01:29.850654 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:01:29.850657 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:01:29.850660 | orchestrator | 2026-03-10 01:01:29.850663 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] ********************************** 2026-03-10 01:01:29.850667 | orchestrator | Tuesday 10 March 2026 01:00:07 +0000 (0:00:00.762) 0:11:12.959 ********* 2026-03-10 01:01:29.850670 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:01:29.850673 | orchestrator | 2026-03-10 01:01:29.850676 | orchestrator | RUNNING HANDLER [ceph-handler : Set 
_mds_handler_called before restart] ******** 2026-03-10 01:01:29.850681 | orchestrator | Tuesday 10 March 2026 01:00:08 +0000 (0:00:00.938) 0:11:13.897 ********* 2026-03-10 01:01:29.850685 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.850688 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.850691 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.850694 | orchestrator | 2026-03-10 01:01:29.850697 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] *********************** 2026-03-10 01:01:29.850700 | orchestrator | Tuesday 10 March 2026 01:00:08 +0000 (0:00:00.390) 0:11:14.288 ********* 2026-03-10 01:01:29.850703 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:01:29.850706 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:01:29.850709 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:01:29.850713 | orchestrator | 2026-03-10 01:01:29.850716 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ******************** 2026-03-10 01:01:29.850719 | orchestrator | Tuesday 10 March 2026 01:00:10 +0000 (0:00:01.424) 0:11:15.712 ********* 2026-03-10 01:01:29.850722 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-10 01:01:29.850725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-10 01:01:29.850728 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-10 01:01:29.850732 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.850738 | orchestrator | 2026-03-10 01:01:29.850743 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] ********* 2026-03-10 01:01:29.850749 | orchestrator | Tuesday 10 March 2026 01:00:11 +0000 (0:00:01.145) 0:11:16.858 ********* 2026-03-10 01:01:29.850754 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.850759 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.850764 | orchestrator | ok: [testbed-node-5] 2026-03-10 
01:01:29.850770 | orchestrator | 2026-03-10 01:01:29.850774 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-10 01:01:29.850777 | orchestrator | 2026-03-10 01:01:29.850780 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-10 01:01:29.850783 | orchestrator | Tuesday 10 March 2026 01:00:12 +0000 (0:00:01.049) 0:11:17.907 ********* 2026-03-10 01:01:29.850786 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:01:29.850789 | orchestrator | 2026-03-10 01:01:29.850792 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-10 01:01:29.850795 | orchestrator | Tuesday 10 March 2026 01:00:13 +0000 (0:00:00.729) 0:11:18.637 ********* 2026-03-10 01:01:29.850799 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:01:29.850802 | orchestrator | 2026-03-10 01:01:29.850805 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-10 01:01:29.850808 | orchestrator | Tuesday 10 March 2026 01:00:14 +0000 (0:00:00.970) 0:11:19.608 ********* 2026-03-10 01:01:29.850811 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.850814 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.850817 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.850821 | orchestrator | 2026-03-10 01:01:29.850831 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-10 01:01:29.850836 | orchestrator | Tuesday 10 March 2026 01:00:14 +0000 (0:00:00.500) 0:11:20.108 ********* 2026-03-10 01:01:29.850841 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.850844 | orchestrator | ok: [testbed-node-4] 2026-03-10 
01:01:29.850847 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.850850 | orchestrator | 2026-03-10 01:01:29.850853 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-10 01:01:29.850856 | orchestrator | Tuesday 10 March 2026 01:00:15 +0000 (0:00:00.772) 0:11:20.881 ********* 2026-03-10 01:01:29.850859 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.850862 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.850865 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.850868 | orchestrator | 2026-03-10 01:01:29.850871 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-10 01:01:29.850874 | orchestrator | Tuesday 10 March 2026 01:00:16 +0000 (0:00:01.160) 0:11:22.042 ********* 2026-03-10 01:01:29.850877 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.850880 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.850883 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.850886 | orchestrator | 2026-03-10 01:01:29.850890 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-10 01:01:29.850893 | orchestrator | Tuesday 10 March 2026 01:00:17 +0000 (0:00:00.787) 0:11:22.829 ********* 2026-03-10 01:01:29.850896 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.850899 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.850902 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.850905 | orchestrator | 2026-03-10 01:01:29.850915 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-10 01:01:29.850920 | orchestrator | Tuesday 10 March 2026 01:00:17 +0000 (0:00:00.364) 0:11:23.194 ********* 2026-03-10 01:01:29.850925 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.850932 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.850939 | orchestrator | skipping: 
[testbed-node-5] 2026-03-10 01:01:29.850946 | orchestrator | 2026-03-10 01:01:29.850952 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-10 01:01:29.850958 | orchestrator | Tuesday 10 March 2026 01:00:18 +0000 (0:00:00.349) 0:11:23.544 ********* 2026-03-10 01:01:29.850963 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.850968 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.850973 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.850978 | orchestrator | 2026-03-10 01:01:29.850984 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-10 01:01:29.850989 | orchestrator | Tuesday 10 March 2026 01:00:18 +0000 (0:00:00.670) 0:11:24.214 ********* 2026-03-10 01:01:29.850994 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.850999 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.851004 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.851009 | orchestrator | 2026-03-10 01:01:29.851014 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-10 01:01:29.851020 | orchestrator | Tuesday 10 March 2026 01:00:19 +0000 (0:00:00.788) 0:11:25.003 ********* 2026-03-10 01:01:29.851028 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.851034 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.851039 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.851045 | orchestrator | 2026-03-10 01:01:29.851050 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-10 01:01:29.851054 | orchestrator | Tuesday 10 March 2026 01:00:20 +0000 (0:00:00.833) 0:11:25.837 ********* 2026-03-10 01:01:29.851058 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.851061 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.851064 | orchestrator | skipping: [testbed-node-5] 2026-03-10 
01:01:29.851067 | orchestrator | 2026-03-10 01:01:29.851070 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-10 01:01:29.851073 | orchestrator | Tuesday 10 March 2026 01:00:20 +0000 (0:00:00.355) 0:11:26.192 ********* 2026-03-10 01:01:29.851079 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.851082 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.851085 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.851088 | orchestrator | 2026-03-10 01:01:29.851091 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-10 01:01:29.851094 | orchestrator | Tuesday 10 March 2026 01:00:21 +0000 (0:00:00.651) 0:11:26.844 ********* 2026-03-10 01:01:29.851097 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.851100 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.851103 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.851106 | orchestrator | 2026-03-10 01:01:29.851109 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-10 01:01:29.851112 | orchestrator | Tuesday 10 March 2026 01:00:21 +0000 (0:00:00.390) 0:11:27.235 ********* 2026-03-10 01:01:29.851115 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.851119 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.851122 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.851125 | orchestrator | 2026-03-10 01:01:29.851145 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-10 01:01:29.851153 | orchestrator | Tuesday 10 March 2026 01:00:22 +0000 (0:00:00.386) 0:11:27.621 ********* 2026-03-10 01:01:29.851156 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.851159 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.851162 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.851165 | orchestrator | 2026-03-10 
01:01:29.851168 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-10 01:01:29.851171 | orchestrator | Tuesday 10 March 2026 01:00:22 +0000 (0:00:00.386) 0:11:28.008 ********* 2026-03-10 01:01:29.851175 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.851178 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.851181 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.851184 | orchestrator | 2026-03-10 01:01:29.851187 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-10 01:01:29.851190 | orchestrator | Tuesday 10 March 2026 01:00:23 +0000 (0:00:00.738) 0:11:28.746 ********* 2026-03-10 01:01:29.851193 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.851196 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.851199 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.851202 | orchestrator | 2026-03-10 01:01:29.851206 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-10 01:01:29.851209 | orchestrator | Tuesday 10 March 2026 01:00:23 +0000 (0:00:00.419) 0:11:29.166 ********* 2026-03-10 01:01:29.851212 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.851215 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.851218 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.851221 | orchestrator | 2026-03-10 01:01:29.851224 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-10 01:01:29.851227 | orchestrator | Tuesday 10 March 2026 01:00:24 +0000 (0:00:00.376) 0:11:29.542 ********* 2026-03-10 01:01:29.851230 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.851233 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.851236 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.851239 | orchestrator | 2026-03-10 01:01:29.851242 | 
orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-10 01:01:29.851246 | orchestrator | Tuesday 10 March 2026 01:00:24 +0000 (0:00:00.379) 0:11:29.921 ********* 2026-03-10 01:01:29.851249 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:01:29.851252 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:01:29.851255 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:01:29.851258 | orchestrator | 2026-03-10 01:01:29.851261 | orchestrator | TASK [ceph-rgw : Include common.yml] ******************************************* 2026-03-10 01:01:29.851264 | orchestrator | Tuesday 10 March 2026 01:00:25 +0000 (0:00:00.925) 0:11:30.847 ********* 2026-03-10 01:01:29.851267 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:01:29.851273 | orchestrator | 2026-03-10 01:01:29.851276 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-10 01:01:29.851284 | orchestrator | Tuesday 10 March 2026 01:00:26 +0000 (0:00:00.657) 0:11:31.505 ********* 2026-03-10 01:01:29.851288 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:01:29.851291 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-10 01:01:29.851294 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-10 01:01:29.851297 | orchestrator | 2026-03-10 01:01:29.851300 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-10 01:01:29.851304 | orchestrator | Tuesday 10 March 2026 01:00:28 +0000 (0:00:02.210) 0:11:33.716 ********* 2026-03-10 01:01:29.851307 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-10 01:01:29.851310 | orchestrator | skipping: [testbed-node-3] => (item=None)  2026-03-10 01:01:29.851313 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:01:29.851316 | orchestrator 
| changed: [testbed-node-4] => (item=None) 2026-03-10 01:01:29.851319 | orchestrator | skipping: [testbed-node-4] => (item=None)  2026-03-10 01:01:29.851322 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:01:29.851325 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-10 01:01:29.851328 | orchestrator | skipping: [testbed-node-5] => (item=None)  2026-03-10 01:01:29.851331 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:01:29.851334 | orchestrator | 2026-03-10 01:01:29.851337 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] ********** 2026-03-10 01:01:29.851343 | orchestrator | Tuesday 10 March 2026 01:00:29 +0000 (0:00:01.633) 0:11:35.350 ********* 2026-03-10 01:01:29.851346 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.851349 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.851352 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.851355 | orchestrator | 2026-03-10 01:01:29.851358 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ****************************** 2026-03-10 01:01:29.851361 | orchestrator | Tuesday 10 March 2026 01:00:30 +0000 (0:00:00.363) 0:11:35.713 ********* 2026-03-10 01:01:29.851364 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:01:29.851367 | orchestrator | 2026-03-10 01:01:29.851370 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] ***************************** 2026-03-10 01:01:29.851373 | orchestrator | Tuesday 10 March 2026 01:00:30 +0000 (0:00:00.607) 0:11:36.321 ********* 2026-03-10 01:01:29.851377 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-10 01:01:29.851380 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => 
(item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-10 01:01:29.851383 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-10 01:01:29.851386 | orchestrator | 2026-03-10 01:01:29.851389 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ****************************************** 2026-03-10 01:01:29.851392 | orchestrator | Tuesday 10 March 2026 01:00:32 +0000 (0:00:01.633) 0:11:37.955 ********* 2026-03-10 01:01:29.851395 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:01:29.851399 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-10 01:01:29.851402 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:01:29.851405 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-10 01:01:29.851411 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:01:29.851414 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}] 2026-03-10 01:01:29.851420 | orchestrator | 2026-03-10 01:01:29.851425 | orchestrator | TASK [ceph-rgw : Get keys from monitors] *************************************** 2026-03-10 01:01:29.851430 | orchestrator | Tuesday 10 March 2026 01:00:38 +0000 (0:00:05.791) 0:11:43.746 ********* 2026-03-10 01:01:29.851436 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:01:29.851441 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-10 01:01:29.851445 | orchestrator | 
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:01:29.851459 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-10 01:01:29.851464 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:01:29.851470 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-10 01:01:29.851475 | orchestrator | 2026-03-10 01:01:29.851480 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] *********************************** 2026-03-10 01:01:29.851486 | orchestrator | Tuesday 10 March 2026 01:00:40 +0000 (0:00:02.417) 0:11:46.164 ********* 2026-03-10 01:01:29.851489 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-10 01:01:29.851492 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:01:29.851495 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-10 01:01:29.851499 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:01:29.851505 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-10 01:01:29.851511 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:01:29.851516 | orchestrator | 2026-03-10 01:01:29.851520 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] ************************************** 2026-03-10 01:01:29.851526 | orchestrator | Tuesday 10 March 2026 01:00:41 +0000 (0:00:01.243) 0:11:47.408 ********* 2026-03-10 01:01:29.851529 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3 2026-03-10 01:01:29.851532 | orchestrator | 2026-03-10 01:01:29.851535 | orchestrator | TASK [ceph-rgw : Create ec profile] ******************************************** 2026-03-10 01:01:29.851538 | orchestrator | Tuesday 10 March 2026 01:00:42 +0000 (0:00:00.284) 0:11:47.692 ********* 2026-03-10 01:01:29.851541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 
'replicated'}})  2026-03-10 01:01:29.851545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-10 01:01:29.851548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-10 01:01:29.851551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-10 01:01:29.851556 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-10 01:01:29.851559 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.851562 | orchestrator | 2026-03-10 01:01:29.851566 | orchestrator | TASK [ceph-rgw : Set crush rule] *********************************************** 2026-03-10 01:01:29.851569 | orchestrator | Tuesday 10 March 2026 01:00:43 +0000 (0:00:01.408) 0:11:49.101 ********* 2026-03-10 01:01:29.851572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-10 01:01:29.851575 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-10 01:01:29.851578 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-10 01:01:29.851588 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-10 01:01:29.851593 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})  2026-03-10 01:01:29.851599 | orchestrator | skipping: [testbed-node-3] 2026-03-10 
01:01:29.851604 | orchestrator | 2026-03-10 01:01:29.851610 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-10 01:01:29.851613 | orchestrator | Tuesday 10 March 2026 01:00:44 +0000 (0:00:00.666) 0:11:49.768 ********* 2026-03-10 01:01:29.851616 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-10 01:01:29.851619 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-10 01:01:29.851622 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-10 01:01:29.851625 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-10 01:01:29.851629 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-10 01:01:29.851634 | orchestrator | 2026-03-10 01:01:29.851638 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-10 01:01:29.851641 | orchestrator | Tuesday 10 March 2026 01:01:15 +0000 (0:00:31.590) 0:12:21.358 ********* 2026-03-10 01:01:29.851644 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.851647 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.851650 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.851653 | orchestrator | 2026-03-10 01:01:29.851656 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-10 01:01:29.851659 | orchestrator | 
Tuesday 10 March 2026 01:01:16 +0000 (0:00:00.352) 0:12:21.711 ********* 2026-03-10 01:01:29.851662 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:01:29.851665 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:01:29.851668 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:01:29.851671 | orchestrator | 2026-03-10 01:01:29.851675 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-10 01:01:29.851678 | orchestrator | Tuesday 10 March 2026 01:01:16 +0000 (0:00:00.359) 0:12:22.070 ********* 2026-03-10 01:01:29.851681 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:01:29.851684 | orchestrator | 2026-03-10 01:01:29.851687 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-10 01:01:29.851690 | orchestrator | Tuesday 10 March 2026 01:01:17 +0000 (0:00:00.920) 0:12:22.991 ********* 2026-03-10 01:01:29.851693 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:01:29.851696 | orchestrator | 2026-03-10 01:01:29.851701 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-10 01:01:29.851705 | orchestrator | Tuesday 10 March 2026 01:01:18 +0000 (0:00:00.546) 0:12:23.538 ********* 2026-03-10 01:01:29.851708 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:01:29.851711 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:01:29.851714 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:01:29.851717 | orchestrator | 2026-03-10 01:01:29.851720 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-10 01:01:29.851723 | orchestrator | Tuesday 10 March 2026 01:01:19 +0000 (0:00:01.297) 0:12:24.835 ********* 2026-03-10 01:01:29.851726 | orchestrator | changed: 
[testbed-node-3]
2026-03-10 01:01:29.851733 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:01:29.851736 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:01:29.851739 | orchestrator |
2026-03-10 01:01:29.851742 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
2026-03-10 01:01:29.851745 | orchestrator | Tuesday 10 March 2026 01:01:20 +0000 (0:00:01.565) 0:12:26.401 *********
2026-03-10 01:01:29.851748 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:01:29.851754 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:01:29.851759 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:01:29.851765 | orchestrator |
2026-03-10 01:01:29.851770 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] **********************************
2026-03-10 01:01:29.851776 | orchestrator | Tuesday 10 March 2026 01:01:22 +0000 (0:00:01.939) 0:12:28.341 *********
2026-03-10 01:01:29.851783 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-10 01:01:29.851788 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-10 01:01:29.851793 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-10 01:01:29.851799 | orchestrator |
2026-03-10 01:01:29.851804 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-10 01:01:29.851810 | orchestrator | Tuesday 10 March 2026 01:01:25 +0000 (0:00:02.874) 0:12:31.216 *********
2026-03-10 01:01:29.851815 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.851821 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.851827 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.851832 | orchestrator
|
2026-03-10 01:01:29.851837 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
2026-03-10 01:01:29.851842 | orchestrator | Tuesday 10 March 2026 01:01:26 +0000 (0:00:00.417) 0:12:31.634 *********
2026-03-10 01:01:29.851847 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-10 01:01:29.851852 | orchestrator |
2026-03-10 01:01:29.851857 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
2026-03-10 01:01:29.851862 | orchestrator | Tuesday 10 March 2026 01:01:26 +0000 (0:00:00.537) 0:12:32.171 *********
2026-03-10 01:01:29.851868 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.851873 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.851879 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.851883 | orchestrator |
2026-03-10 01:01:29.851886 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
2026-03-10 01:01:29.851889 | orchestrator | Tuesday 10 March 2026 01:01:27 +0000 (0:00:00.560) 0:12:32.732 *********
2026-03-10 01:01:29.851892 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:01:29.851895 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:01:29.851898 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:01:29.851901 | orchestrator |
2026-03-10 01:01:29.851904 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
2026-03-10 01:01:29.851907 | orchestrator | Tuesday 10 March 2026 01:01:27 +0000 (0:00:00.406) 0:12:33.138 *********
2026-03-10 01:01:29.851910 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-10 01:01:29.851913 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-10 01:01:29.851916 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-10 01:01:29.851919 | orchestrator
| skipping: [testbed-node-3]
2026-03-10 01:01:29.851922 | orchestrator |
2026-03-10 01:01:29.851925 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
2026-03-10 01:01:29.851928 | orchestrator | Tuesday 10 March 2026 01:01:28 +0000 (0:00:00.629) 0:12:33.768 *********
2026-03-10 01:01:29.851931 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:01:29.851934 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:01:29.851937 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:01:29.851943 | orchestrator |
2026-03-10 01:01:29.851946 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:01:29.851949 | orchestrator | testbed-node-0 : ok=134  changed=34  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0
2026-03-10 01:01:29.851952 | orchestrator | testbed-node-1 : ok=127  changed=32  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0
2026-03-10 01:01:29.851955 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0
2026-03-10 01:01:29.851959 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0
2026-03-10 01:01:29.851962 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0
2026-03-10 01:01:29.851968 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0
2026-03-10 01:01:29.851971 | orchestrator |
2026-03-10 01:01:29.851974 | orchestrator |
2026-03-10 01:01:29.851977 | orchestrator |
2026-03-10 01:01:29.851980 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:01:29.851983 | orchestrator | Tuesday 10 March 2026 01:01:28 +0000 (0:00:00.244) 0:12:34.013 *********
2026-03-10 01:01:29.851986 | orchestrator | ===============================================================================
2026-03-10 01:01:29.851989 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 47.29s
2026-03-10 01:01:29.851992 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 42.76s
2026-03-10 01:01:29.851995 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 31.59s
2026-03-10 01:01:29.851998 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 30.26s
2026-03-10 01:01:29.852001 | orchestrator | ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.92s
2026-03-10 01:01:29.852004 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 15.31s
2026-03-10 01:01:29.852007 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.60s
2026-03-10 01:01:29.852010 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.43s
2026-03-10 01:01:29.852018 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.91s
2026-03-10 01:01:29.852023 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.01s
2026-03-10 01:01:29.852028 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.87s
2026-03-10 01:01:29.852033 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.75s
2026-03-10 01:01:29.852037 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 5.79s
2026-03-10 01:01:29.852042 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.54s
2026-03-10 01:01:29.852047 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.93s
2026-03-10 01:01:29.852052 | orchestrator | ceph-mon : Copy admin keyring over to mons ------------------------------ 4.75s
2026-03-10 01:01:29.852057 | orchestrator | ceph-container-common : Enable ceph.target ------------------------------ 4.41s
2026-03-10 01:01:29.852062 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 4.33s
2026-03-10 01:01:29.852066 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 4.00s
2026-03-10 01:01:29.852071 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.73s
2026-03-10 01:01:29.852077 | orchestrator | 2026-03-10 01:01:29 | INFO  | Task d9c72e11-5900-42cc-b16c-d79be021e929 is in state STARTED
2026-03-10 01:01:29.852082 | orchestrator | 2026-03-10 01:01:29 | INFO  | Task 4a030905-8bff-4849-ae35-0cf98349a90d is in state STARTED
2026-03-10 01:01:29.852092 | orchestrator | 2026-03-10 01:01:29 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:01:32.888775 | orchestrator | 2026-03-10 01:01:32 | INFO  | Task d9c72e11-5900-42cc-b16c-d79be021e929 is in state STARTED
2026-03-10 01:01:32.890797 | orchestrator | 2026-03-10 01:01:32 | INFO  | Task abc0b55b-7534-402c-8870-e2ab8ac318a2 is in state STARTED
2026-03-10 01:01:32.892894 | orchestrator | 2026-03-10 01:01:32 | INFO  | Task 4a030905-8bff-4849-ae35-0cf98349a90d is in state STARTED
2026-03-10 01:01:32.893123 | orchestrator | 2026-03-10 01:01:32 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:01:35.934592 | orchestrator | 2026-03-10 01:01:35 | INFO  | Task d9c72e11-5900-42cc-b16c-d79be021e929 is in state STARTED
2026-03-10 01:01:35.935846 | orchestrator | 2026-03-10 01:01:35 | INFO  | Task abc0b55b-7534-402c-8870-e2ab8ac318a2 is in state STARTED
2026-03-10 01:01:35.936880 | orchestrator | 2026-03-10 01:01:35 | INFO  | Task 4a030905-8bff-4849-ae35-0cf98349a90d is in state STARTED
2026-03-10 01:01:35.936912 | orchestrator | 2026-03-10 01:01:35 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:01:38.981296 | orchestrator | 2026-03-10 01:01:38 | INFO  | Task
d9c72e11-5900-42cc-b16c-d79be021e929 is in state STARTED
2026-03-10 01:02:33.909178 | orchestrator | 2026-03-10 01:02:33 | INFO  | Task d9c72e11-5900-42cc-b16c-d79be021e929 is in state STARTED
2026-03-10 01:02:33.910530 | orchestrator | 2026-03-10 01:02:33 | INFO  | Task abc0b55b-7534-402c-8870-e2ab8ac318a2 is in state STARTED
2026-03-10 01:02:33.913292 | orchestrator | 2026-03-10 01:02:33 | INFO  | Task 4a030905-8bff-4849-ae35-0cf98349a90d is in state STARTED
2026-03-10 01:02:33.913375 | orchestrator | 2026-03-10 01:02:33 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:02:36.953852 | orchestrator | 2026-03-10 01:02:36 | INFO  | Task d9c72e11-5900-42cc-b16c-d79be021e929 is in state SUCCESS
2026-03-10 01:02:36.955823 | orchestrator
|
2026-03-10 01:02:36.955864 | orchestrator |
2026-03-10 01:02:36.955876 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************
2026-03-10 01:02:36.955888 | orchestrator |
2026-03-10 01:02:36.955899 | orchestrator | TASK [Inform the user about the following task] ********************************
2026-03-10 01:02:36.955910 | orchestrator | Tuesday 10 March 2026 00:59:20 +0000 (0:00:00.094) 0:00:00.094 *********
2026-03-10 01:02:36.955922 | orchestrator | ok: [localhost] => {
2026-03-10 01:02:36.955934 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine."
2026-03-10 01:02:36.955945 | orchestrator | }
2026-03-10 01:02:36.955956 | orchestrator |
2026-03-10 01:02:36.955968 | orchestrator | TASK [Check MariaDB service] ***************************************************
2026-03-10 01:02:36.955979 | orchestrator | Tuesday 10 March 2026 00:59:20 +0000 (0:00:00.052) 0:00:00.147 *********
2026-03-10 01:02:36.955990 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"}
2026-03-10 01:02:36.956000 | orchestrator | ...ignoring
2026-03-10 01:02:36.956010 | orchestrator |
2026-03-10 01:02:36.956020 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ********
2026-03-10 01:02:36.956030 | orchestrator | Tuesday 10 March 2026 00:59:23 +0000 (0:00:03.041) 0:00:03.189 *********
2026-03-10 01:02:36.956040 | orchestrator | skipping: [localhost]
2026-03-10 01:02:36.956049 | orchestrator |
2026-03-10 01:02:36.956059 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ******************************
2026-03-10 01:02:36.956068 | orchestrator | Tuesday 10 March 2026 00:59:23 +0000 (0:00:00.061) 0:00:03.250 *********
2026-03-10 01:02:36.956078 | orchestrator | ok: [localhost]
2026-03-10 01:02:36.956088 | orchestrator |
2026-03-10 01:02:36.956098 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-10 01:02:36.956108 | orchestrator |
2026-03-10 01:02:36.956117 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-10 01:02:36.956133 | orchestrator | Tuesday 10 March 2026 00:59:24 +0000 (0:00:00.178) 0:00:03.429 *********
2026-03-10 01:02:36.956143 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:02:36.956152 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:02:36.956162 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:02:36.956171 | orchestrator |
2026-03-10 01:02:36.956181 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-10 01:02:36.956191 | orchestrator | Tuesday 10 March 2026 00:59:24 +0000 (0:00:00.346) 0:00:03.775 *********
2026-03-10 01:02:36.956200 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True)
2026-03-10 01:02:36.956211 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True)
2026-03-10 01:02:36.956220 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True)
2026-03-10 01:02:36.956230 | orchestrator |
2026-03-10 01:02:36.956240 | orchestrator | PLAY [Apply role mariadb] ******************************************************
2026-03-10 01:02:36.956250 | orchestrator |
2026-03-10 01:02:36.956260 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] ***************************
2026-03-10 01:02:36.956269 | orchestrator | Tuesday 10 March 2026 00:59:24 +0000 (0:00:00.592) 0:00:04.368 *********
2026-03-10 01:02:36.956279 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-10 01:02:36.956289 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2026-03-10 01:02:36.956300 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2026-03-10 01:02:36.956309 | orchestrator |
2026-03-10 01:02:36.956319 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-10 01:02:36.956329 | orchestrator | Tuesday 10 March 2026 00:59:25 +0000 (0:00:00.472) 0:00:04.841 *********
2026-03-10 01:02:36.956339 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:02:36.956349 | orchestrator |
2026-03-10 01:02:36.956359 | orchestrator | TASK [mariadb : Ensuring config directories exist] *****************************
2026-03-10 01:02:36.956379 | orchestrator | Tuesday 10 March 2026 00:59:26 +0000 (0:00:01.046) 0:00:05.888 *********
2026-03-10 01:02:36.956404 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql',
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-10 01:02:36.956422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-10 01:02:36.956450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 
'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-10 01:02:36.956467 | orchestrator |
2026-03-10 01:02:36.956481 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] **************
2026-03-10 01:02:36.956492 | orchestrator | Tuesday 10 March 2026 00:59:30 +0000 (0:00:04.301) 0:00:10.189 *********
2026-03-10 01:02:36.956502 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:02:36.956513 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:02:36.956523 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:02:36.956533 | orchestrator |
2026-03-10 01:02:36.956543 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2026-03-10 01:02:36.956553 | orchestrator | Tuesday 10 March 2026 00:59:31 +0000 (0:00:00.828) 0:00:11.018 *********
2026-03-10 01:02:36.956563 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:02:36.956573 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:02:36.956583 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:02:36.956593 | orchestrator |
2026-03-10 01:02:36.956603 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2026-03-10 01:02:36.956613 | orchestrator | Tuesday 10 March 2026 00:59:33 +0000 (0:00:01.594) 0:00:12.613 *********
2026-03-10 01:02:36.956626 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-10 01:02:36.956649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-10 01:02:36.956661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-10 01:02:36.956679 | orchestrator |
2026-03-10 01:02:36.956689 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2026-03-10 01:02:36.956699 | orchestrator | Tuesday 10 March 2026 00:59:37 +0000 (0:00:04.179) 0:00:16.792 *********
2026-03-10 01:02:36.956707 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:02:36.956715 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:02:36.956723 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:02:36.956732 | orchestrator |
2026-03-10 01:02:36.956758 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2026-03-10 01:02:36.956768 | orchestrator | Tuesday 10 March 2026 00:59:38 +0000 (0:00:01.181) 0:00:17.974 *********
2026-03-10 01:02:36.956777 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:02:36.956786 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:02:36.956795 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:02:36.956804 | orchestrator |
2026-03-10 01:02:36.956813 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-10 01:02:36.956823 | orchestrator | Tuesday 10 March 2026 00:59:43 +0000 (0:00:04.946) 0:00:22.920 *********
2026-03-10 01:02:36.956832 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:02:36.956841 | orchestrator |
2026-03-10 01:02:36.956850 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2026-03-10 01:02:36.956859 | orchestrator | Tuesday 10 March 2026 00:59:44 +0000 (0:00:00.566) 0:00:23.487 *********
2026-03-10 01:02:36.956876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-10 01:02:36.956886 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:02:36.956899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-10 01:02:36.956915 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:02:36.956930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-10 01:02:36.956940 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:02:36.956949 | orchestrator |
2026-03-10 01:02:36.956958 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2026-03-10 01:02:36.956967 | orchestrator | Tuesday 10 March 2026 00:59:47 +0000 (0:00:03.343) 0:00:26.830 *********
2026-03-10 01:02:36.956980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-10 01:02:36.956999 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:02:36.957013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-10 01:02:36.957023 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:02:36.957036 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-10 01:02:36.957051 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:02:36.957061 | orchestrator |
2026-03-10 01:02:36.957070 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2026-03-10 01:02:36.957078 | orchestrator | Tuesday 10 March 2026 00:59:51 +0000 (0:00:04.298) 0:00:31.129 *********
2026-03-10 01:02:36.957088 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-10 01:02:36.957098 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:02:36.957120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-10 01:02:36.957135 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:02:36.957145 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-10 01:02:36.957154 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:02:36.957163 | orchestrator |
2026-03-10 01:02:36.957172 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2026-03-10 01:02:36.957182 | orchestrator | Tuesday 10 March 2026 00:59:54 +0000 (0:00:03.055) 0:00:34.184 *********
2026-03-10 01:02:36.957200 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-10 01:02:36.957215 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-10 01:02:36.957232 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})
2026-03-10 01:02:36.957246 | orchestrator |
2026-03-10 01:02:36.957255 | orchestrator | TASK [mariadb : Create MariaDB volume] *****************************************
2026-03-10 01:02:36.957265 | orchestrator | Tuesday 10 March 2026 00:59:59 +0000 (0:00:04.280) 0:00:38.464 *********
2026-03-10 01:02:36.957273 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:02:36.957282 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:02:36.957294 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:02:36.957303 | orchestrator |
2026-03-10 01:02:36.957312 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] *************
2026-03-10 01:02:36.957321 | orchestrator | Tuesday 10 March 2026 01:00:00 +0000 (0:00:00.940) 0:00:39.404 *********
2026-03-10 01:02:36.957331 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:02:36.957339 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:02:36.957348 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:02:36.957356 | orchestrator |
2026-03-10 01:02:36.957366 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] *************
2026-03-10 01:02:36.957375 | orchestrator | Tuesday 10 March 2026 01:00:00 +0000 (0:00:00.963) 0:00:40.368 *********
2026-03-10 01:02:36.957384 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:02:36.957393 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:02:36.957402 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:02:36.957411 | orchestrator |
2026-03-10 01:02:36.957420 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] ***************************
2026-03-10 01:02:36.957440 | orchestrator | Tuesday 10 March 2026 01:00:01 +0000 (0:00:00.467) 0:00:40.836 *********
2026-03-10 01:02:36.957450 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"}
2026-03-10 01:02:36.957459 | orchestrator | ...ignoring
2026-03-10 01:02:36.957469 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"}
2026-03-10 01:02:36.957478 | orchestrator | ...ignoring
2026-03-10 01:02:36.957487 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"}
2026-03-10 01:02:36.957496 | orchestrator | ...ignoring
2026-03-10 01:02:36.957505 | orchestrator |
2026-03-10 01:02:36.957514 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] ***********
2026-03-10 01:02:36.957523 | orchestrator | Tuesday 10 March 2026 01:00:12 +0000 (0:00:10.997) 0:00:51.833 *********
2026-03-10 01:02:36.957533 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:02:36.957542 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:02:36.957551 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:02:36.957560 | orchestrator |
2026-03-10 01:02:36.957569 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] **************************
2026-03-10 01:02:36.957578 | orchestrator | Tuesday 10 March 2026 01:00:13 +0000 (0:00:00.567) 0:00:52.401 *********
2026-03-10 01:02:36.957587 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:02:36.957596 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:02:36.957605 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:02:36.957614 | orchestrator |
2026-03-10 01:02:36.957623 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] ***********************
2026-03-10 01:02:36.957632 | orchestrator | Tuesday 10 March 2026 01:00:13 +0000 (0:00:00.830) 0:00:53.232 *********
2026-03-10 01:02:36.957641 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:02:36.957650 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:02:36.957732 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:02:36.957742 | orchestrator |
2026-03-10 01:02:36.957750 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] *********************
2026-03-10 01:02:36.957760 | orchestrator | Tuesday 10 March 2026 01:00:14 +0000 (0:00:00.599) 0:00:53.831 *********
2026-03-10 01:02:36.957768 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:02:36.957778 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:02:36.957787 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:02:36.957796 | orchestrator |
2026-03-10 01:02:36.957805 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] *******
2026-03-10 01:02:36.957814 | orchestrator | Tuesday 10 March 2026 01:00:14 +0000 (0:00:00.517) 0:00:54.348 *********
2026-03-10 01:02:36.957822 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:02:36.957831 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:02:36.957840 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:02:36.957849 | orchestrator |
2026-03-10 01:02:36.957858 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] ***
2026-03-10 01:02:36.957867 | orchestrator | Tuesday 10 March 2026 01:00:15 +0000 (0:00:00.579) 0:00:54.928 *********
2026-03-10 01:02:36.957882 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:02:36.957891 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:02:36.957900 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:02:36.957909 | orchestrator |
2026-03-10 01:02:36.957918 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-10 01:02:36.957928 | orchestrator | Tuesday 10 March 2026 01:00:16 +0000 (0:00:00.875) 0:00:55.803 *********
2026-03-10 01:02:36.957937 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:02:36.957946 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:02:36.957954 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0
2026-03-10 01:02:36.957963 | orchestrator |
2026-03-10 01:02:36.957973 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] ***************************
2026-03-10 01:02:36.957982 | orchestrator | Tuesday 10 March 2026 01:00:16 +0000 (0:00:00.496) 0:00:56.300 *********
2026-03-10 01:02:36.957991 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:02:36.958000 | orchestrator |
2026-03-10 01:02:36.958009 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] **************************
2026-03-10 01:02:36.958071 | orchestrator | Tuesday 10 March 2026 01:00:28 +0000 (0:00:11.495) 0:01:07.796 *********
2026-03-10 01:02:36.958081 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:02:36.958091 | orchestrator |
2026-03-10 01:02:36.958100 | orchestrator | TASK [mariadb : include_tasks] *************************************************
2026-03-10 01:02:36.958109 | orchestrator | Tuesday 10 March 2026 01:00:28 +0000 (0:00:00.142) 0:01:07.939 *********
2026-03-10 01:02:36.958118 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:02:36.958127 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:02:36.958136 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:02:36.958145 | orchestrator |
2026-03-10 01:02:36.958154 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] *******************
2026-03-10 01:02:36.958163 | orchestrator | Tuesday 10 March 2026 01:00:29 +0000 (0:00:01.155) 0:01:09.094 *********
2026-03-10 01:02:36.958172 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:02:36.958181 | orchestrator |
2026-03-10 01:02:36.958195 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] *******
2026-03-10 01:02:36.958204 | orchestrator | Tuesday 10 March 2026 01:00:38 +0000 (0:00:08.399) 0:01:17.494 *********
2026-03-10 01:02:36.958213 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:02:36.958222 | orchestrator |
2026-03-10 01:02:36.958231 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] *******
2026-03-10 01:02:36.958241 | orchestrator | Tuesday 10 March 2026 01:00:39 +0000 (0:00:01.811) 0:01:19.306 *********
2026-03-10 01:02:36.958250 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:02:36.958259 |
orchestrator | 2026-03-10 01:02:36.958268 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-10 01:02:36.958282 | orchestrator | Tuesday 10 March 2026 01:00:42 +0000 (0:00:02.805) 0:01:22.112 ********* 2026-03-10 01:02:36.958292 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:02:36.958301 | orchestrator | 2026-03-10 01:02:36.958310 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-10 01:02:36.958319 | orchestrator | Tuesday 10 March 2026 01:00:42 +0000 (0:00:00.164) 0:01:22.276 ********* 2026-03-10 01:02:36.958328 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:02:36.958337 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:02:36.958346 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:02:36.958355 | orchestrator | 2026-03-10 01:02:36.958365 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-10 01:02:36.958376 | orchestrator | Tuesday 10 March 2026 01:00:43 +0000 (0:00:00.445) 0:01:22.722 ********* 2026-03-10 01:02:36.958386 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:02:36.958395 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:02:36.958405 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:02:36.958414 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-10 01:02:36.958441 | orchestrator | 2026-03-10 01:02:36.958452 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-10 01:02:36.958462 | orchestrator | skipping: no hosts matched 2026-03-10 01:02:36.958472 | orchestrator | 2026-03-10 01:02:36.958482 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-10 01:02:36.958492 | orchestrator | 2026-03-10 01:02:36.958502 | orchestrator | TASK [mariadb : Restart MariaDB container] 
************************************* 2026-03-10 01:02:36.958512 | orchestrator | Tuesday 10 March 2026 01:00:44 +0000 (0:00:00.687) 0:01:23.410 ********* 2026-03-10 01:02:36.958522 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:02:36.958532 | orchestrator | 2026-03-10 01:02:36.958541 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-10 01:02:36.958551 | orchestrator | Tuesday 10 March 2026 01:01:02 +0000 (0:00:18.112) 0:01:41.522 ********* 2026-03-10 01:02:36.958562 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:02:36.958572 | orchestrator | 2026-03-10 01:02:36.958581 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-10 01:02:36.958592 | orchestrator | Tuesday 10 March 2026 01:01:17 +0000 (0:00:15.746) 0:01:57.268 ********* 2026-03-10 01:02:36.958602 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:02:36.958612 | orchestrator | 2026-03-10 01:02:36.958622 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-10 01:02:36.958631 | orchestrator | 2026-03-10 01:02:36.958641 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-10 01:02:36.958650 | orchestrator | Tuesday 10 March 2026 01:01:20 +0000 (0:00:02.760) 0:02:00.029 ********* 2026-03-10 01:02:36.958660 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:02:36.958670 | orchestrator | 2026-03-10 01:02:36.958681 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-10 01:02:36.958691 | orchestrator | Tuesday 10 March 2026 01:01:38 +0000 (0:00:17.915) 0:02:17.944 ********* 2026-03-10 01:02:36.958702 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:02:36.958710 | orchestrator | 2026-03-10 01:02:36.958718 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-10 01:02:36.958726 
| orchestrator | Tuesday 10 March 2026 01:01:55 +0000 (0:00:16.559) 0:02:34.504 ********* 2026-03-10 01:02:36.958735 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:02:36.958745 | orchestrator | 2026-03-10 01:02:36.958754 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-10 01:02:36.958763 | orchestrator | 2026-03-10 01:02:36.958778 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-10 01:02:36.958787 | orchestrator | Tuesday 10 March 2026 01:01:57 +0000 (0:00:02.866) 0:02:37.371 ********* 2026-03-10 01:02:36.958797 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:02:36.958806 | orchestrator | 2026-03-10 01:02:36.958815 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-10 01:02:36.958830 | orchestrator | Tuesday 10 March 2026 01:02:10 +0000 (0:00:13.019) 0:02:50.391 ********* 2026-03-10 01:02:36.958840 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:02:36.958848 | orchestrator | 2026-03-10 01:02:36.958857 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-10 01:02:36.958866 | orchestrator | Tuesday 10 March 2026 01:02:16 +0000 (0:00:05.606) 0:02:55.998 ********* 2026-03-10 01:02:36.958875 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:02:36.958884 | orchestrator | 2026-03-10 01:02:36.958893 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-10 01:02:36.958903 | orchestrator | 2026-03-10 01:02:36.958912 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-10 01:02:36.958921 | orchestrator | Tuesday 10 March 2026 01:02:19 +0000 (0:00:03.011) 0:02:59.010 ********* 2026-03-10 01:02:36.958930 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:02:36.958939 | orchestrator | 
2026-03-10 01:02:36.958948 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-10 01:02:36.958957 | orchestrator | Tuesday 10 March 2026 01:02:20 +0000 (0:00:00.615) 0:02:59.625 ********* 2026-03-10 01:02:36.958966 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:02:36.958975 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:02:36.958984 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:02:36.958993 | orchestrator | 2026-03-10 01:02:36.959002 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-10 01:02:36.959015 | orchestrator | Tuesday 10 March 2026 01:02:22 +0000 (0:00:02.448) 0:03:02.074 ********* 2026-03-10 01:02:36.959024 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:02:36.959032 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:02:36.959041 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:02:36.959051 | orchestrator | 2026-03-10 01:02:36.959060 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-10 01:02:36.959069 | orchestrator | Tuesday 10 March 2026 01:02:25 +0000 (0:00:02.449) 0:03:04.523 ********* 2026-03-10 01:02:36.959078 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:02:36.959087 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:02:36.959096 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:02:36.959105 | orchestrator | 2026-03-10 01:02:36.959114 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-10 01:02:36.959123 | orchestrator | Tuesday 10 March 2026 01:02:27 +0000 (0:00:02.537) 0:03:07.061 ********* 2026-03-10 01:02:36.959132 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:02:36.959141 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:02:36.959150 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:02:36.959159 | orchestrator | 
2026-03-10 01:02:36.959168 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-10 01:02:36.959177 | orchestrator | Tuesday 10 March 2026 01:02:30 +0000 (0:00:02.506) 0:03:09.567 ********* 2026-03-10 01:02:36.959186 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:02:36.959196 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:02:36.959205 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:02:36.959214 | orchestrator | 2026-03-10 01:02:36.959223 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-10 01:02:36.959232 | orchestrator | Tuesday 10 March 2026 01:02:34 +0000 (0:00:03.966) 0:03:13.534 ********* 2026-03-10 01:02:36.959241 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:02:36.959251 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:02:36.959260 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:02:36.959269 | orchestrator | 2026-03-10 01:02:36.959278 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:02:36.959287 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-10 01:02:36.959296 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-10 01:02:36.959310 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-10 01:02:36.959320 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-10 01:02:36.959329 | orchestrator | 2026-03-10 01:02:36.959338 | orchestrator | 2026-03-10 01:02:36.959347 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:02:36.959356 | orchestrator | Tuesday 10 March 2026 01:02:34 +0000 (0:00:00.227) 0:03:13.762 ********* 2026-03-10 01:02:36.959365 | 
orchestrator | =============================================================================== 2026-03-10 01:02:36.959374 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 36.03s 2026-03-10 01:02:36.959383 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 32.31s 2026-03-10 01:02:36.959392 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 13.02s 2026-03-10 01:02:36.959401 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 11.50s 2026-03-10 01:02:36.959410 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.00s 2026-03-10 01:02:36.959419 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 8.40s 2026-03-10 01:02:36.959444 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.63s 2026-03-10 01:02:36.959452 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 5.61s 2026-03-10 01:02:36.959460 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.95s 2026-03-10 01:02:36.959469 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.30s 2026-03-10 01:02:36.959478 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 4.30s 2026-03-10 01:02:36.959487 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 4.28s 2026-03-10 01:02:36.959496 | orchestrator | mariadb : Copying over config.json files for services ------------------- 4.18s 2026-03-10 01:02:36.959505 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.97s 2026-03-10 01:02:36.959514 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.34s 2026-03-10 01:02:36.959522 | 
orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 3.05s 2026-03-10 01:02:36.959532 | orchestrator | Check MariaDB service --------------------------------------------------- 3.04s 2026-03-10 01:02:36.959541 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 3.01s 2026-03-10 01:02:36.959550 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.81s 2026-03-10 01:02:36.959559 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.54s 2026-03-10 01:02:36.959568 | orchestrator | 2026-03-10 01:02:36 | INFO  | Task abc0b55b-7534-402c-8870-e2ab8ac318a2 is in state STARTED 2026-03-10 01:02:36.959578 | orchestrator | 2026-03-10 01:02:36 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:02:36.964522 | orchestrator | 2026-03-10 01:02:36 | INFO  | Task 4a030905-8bff-4849-ae35-0cf98349a90d is in state SUCCESS 2026-03-10 01:02:36.964555 | orchestrator | 2026-03-10 01:02:36 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:02:36.964564 | orchestrator | 2026-03-10 01:02:36 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:02:36.967326 | orchestrator | 2026-03-10 01:02:36.967484 | orchestrator | 2026-03-10 01:02:36.967506 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:02:36.967518 | orchestrator | 2026-03-10 01:02:36.967529 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:02:36.967561 | orchestrator | Tuesday 10 March 2026 00:59:21 +0000 (0:00:00.293) 0:00:00.293 ********* 2026-03-10 01:02:36.967572 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:02:36.967584 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:02:36.967594 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:02:36.967605 | orchestrator | 2026-03-10 
01:02:36.967616 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:02:36.967627 | orchestrator | Tuesday 10 March 2026 00:59:21 +0000 (0:00:00.422) 0:00:00.716 ********* 2026-03-10 01:02:36.967637 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-10 01:02:36.967649 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-10 01:02:36.967659 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-10 01:02:36.967670 | orchestrator | 2026-03-10 01:02:36.967681 | orchestrator | PLAY [Apply role opensearch] *************************************************** 2026-03-10 01:02:36.967691 | orchestrator | 2026-03-10 01:02:36.967702 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-10 01:02:36.967713 | orchestrator | Tuesday 10 March 2026 00:59:21 +0000 (0:00:00.535) 0:00:01.251 ********* 2026-03-10 01:02:36.967724 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:02:36.967734 | orchestrator | 2026-03-10 01:02:36.967746 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-10 01:02:36.967757 | orchestrator | Tuesday 10 March 2026 00:59:22 +0000 (0:00:00.560) 0:00:01.811 ********* 2026-03-10 01:02:36.967768 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-10 01:02:36.967779 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-10 01:02:36.967789 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-10 01:02:36.967800 | orchestrator | 2026-03-10 01:02:36.967810 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-10 01:02:36.967821 | 
orchestrator | Tuesday 10 March 2026 00:59:25 +0000 (0:00:02.776) 0:00:04.588 ********* 2026-03-10 01:02:36.967835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-10 01:02:36.967931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-10 01:02:36.967983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 
'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-10 01:02:36.968018 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-10 01:02:36.968035 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-10 01:02:36.968051 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-10 01:02:36.968064 | orchestrator | 
2026-03-10 01:02:36.968084 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-10 01:02:36.968097 | orchestrator | Tuesday 10 March 2026 00:59:27 +0000 (0:00:02.450) 0:00:07.038 ********* 2026-03-10 01:02:36.968110 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:02:36.968123 | orchestrator | 2026-03-10 01:02:36.968135 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-10 01:02:36.968147 | orchestrator | Tuesday 10 March 2026 00:59:28 +0000 (0:00:00.910) 0:00:07.949 ********* 2026-03-10 01:02:36.968176 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-10 01:02:36.968191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-10 01:02:36.968203 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-10 01:02:36.968215 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 
'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-10 01:02:36.968250 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-10 01:02:36.968271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-10 01:02:36.968305 | orchestrator | 2026-03-10 01:02:36.968324 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-10 01:02:36.968343 | orchestrator | Tuesday 10 March 2026 00:59:32 +0000 (0:00:03.348) 0:00:11.298 ********* 2026-03-10 01:02:36.968363 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-10 01:02:36.968383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-10 01:02:36.968415 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:02:36.968495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-10 01:02:36.968530 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-10 01:02:36.968551 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:02:36.968572 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-10 01:02:36.968593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 
'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-10 01:02:36.968621 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:02:36.968633 | orchestrator | 2026-03-10 01:02:36.968644 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-10 01:02:36.968655 | orchestrator | Tuesday 10 March 2026 00:59:33 +0000 (0:00:01.168) 0:00:12.466 ********* 2026-03-10 01:02:36.968671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-10 01:02:36.968691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-10 01:02:36.968704 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:02:36.968716 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-10 01:02:36.968728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-10 01:02:36.968746 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:02:36.968757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-10 01:02:36.968781 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-10 01:02:36.968793 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:02:36.968804 | orchestrator | 2026-03-10 01:02:36.968815 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-10 01:02:36.968826 | orchestrator | Tuesday 10 March 2026 00:59:34 +0000 (0:00:01.432) 0:00:13.898 ********* 2026-03-10 01:02:36.968838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-10 01:02:36.968849 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-10 01:02:36.968867 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-10 01:02:36.968891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-10 01:02:36.968904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-10 01:02:36.968916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-10 01:02:36.968934 | orchestrator | 2026-03-10 01:02:36.968951 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-10 01:02:36.968999 | orchestrator | Tuesday 10 March 2026 00:59:37 +0000 (0:00:02.737) 0:00:16.636 ********* 2026-03-10 01:02:36.969019 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:02:36.969036 | orchestrator | changed: [testbed-node-1] 
2026-03-10 01:02:36.969054 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:02:36.969073 | orchestrator | 2026-03-10 01:02:36.969092 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-10 01:02:36.969109 | orchestrator | Tuesday 10 March 2026 00:59:40 +0000 (0:00:02.822) 0:00:19.459 ********* 2026-03-10 01:02:36.969129 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:02:36.969147 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:02:36.969165 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:02:36.969184 | orchestrator | 2026-03-10 01:02:36.969203 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-10 01:02:36.969222 | orchestrator | Tuesday 10 March 2026 00:59:42 +0000 (0:00:02.340) 0:00:21.799 ********* 2026-03-10 01:02:36.969250 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-10 01:02:36.969283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': 
'-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-10 01:02:36.969302 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-10 01:02:36.969328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-10 01:02:36.969340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-10 01:02:36.969365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-10 01:02:36.969378 | orchestrator | 2026-03-10 01:02:36.969389 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-10 01:02:36.969400 | orchestrator | Tuesday 10 March 2026 00:59:44 +0000 (0:00:01.875) 0:00:23.674 ********* 2026-03-10 01:02:36.969411 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:02:36.969422 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:02:36.969467 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:02:36.969478 | orchestrator | 2026-03-10 01:02:36.969489 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-10 01:02:36.969500 | orchestrator | Tuesday 10 March 2026 00:59:44 +0000 (0:00:00.344) 0:00:24.019 ********* 2026-03-10 01:02:36.969518 | orchestrator | 2026-03-10 01:02:36.969529 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-10 01:02:36.969540 | orchestrator | Tuesday 10 March 2026 00:59:44 +0000 (0:00:00.077) 0:00:24.097 ********* 2026-03-10 01:02:36.969551 | orchestrator | 2026-03-10 01:02:36.969562 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-10 01:02:36.969573 | 
orchestrator | Tuesday 10 March 2026 00:59:44 +0000 (0:00:00.070) 0:00:24.167 ********* 2026-03-10 01:02:36.969583 | orchestrator | 2026-03-10 01:02:36.969594 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-10 01:02:36.969605 | orchestrator | Tuesday 10 March 2026 00:59:44 +0000 (0:00:00.079) 0:00:24.247 ********* 2026-03-10 01:02:36.969615 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:02:36.969626 | orchestrator | 2026-03-10 01:02:36.969637 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2026-03-10 01:02:36.969648 | orchestrator | Tuesday 10 March 2026 00:59:45 +0000 (0:00:00.793) 0:00:25.040 ********* 2026-03-10 01:02:36.969658 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:02:36.969669 | orchestrator | 2026-03-10 01:02:36.969680 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-10 01:02:36.969691 | orchestrator | Tuesday 10 March 2026 00:59:45 +0000 (0:00:00.207) 0:00:25.248 ********* 2026-03-10 01:02:36.969702 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:02:36.969712 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:02:36.969723 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:02:36.969734 | orchestrator | 2026-03-10 01:02:36.969745 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-10 01:02:36.969756 | orchestrator | Tuesday 10 March 2026 01:00:58 +0000 (0:01:12.421) 0:01:37.669 ********* 2026-03-10 01:02:36.969767 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:02:36.969778 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:02:36.969788 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:02:36.969799 | orchestrator | 2026-03-10 01:02:36.969810 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-10 01:02:36.969821 | 
orchestrator | Tuesday 10 March 2026 01:02:21 +0000 (0:01:23.143) 0:03:00.813 ********* 2026-03-10 01:02:36.969831 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:02:36.969842 | orchestrator | 2026-03-10 01:02:36.969853 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-10 01:02:36.969864 | orchestrator | Tuesday 10 March 2026 01:02:22 +0000 (0:00:00.743) 0:03:01.556 ********* 2026-03-10 01:02:36.969875 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:02:36.969886 | orchestrator | 2026-03-10 01:02:36.969897 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-03-10 01:02:36.969908 | orchestrator | Tuesday 10 March 2026 01:02:24 +0000 (0:00:02.677) 0:03:04.233 ********* 2026-03-10 01:02:36.969918 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:02:36.969929 | orchestrator | 2026-03-10 01:02:36.969940 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-10 01:02:36.969951 | orchestrator | Tuesday 10 March 2026 01:02:27 +0000 (0:00:02.752) 0:03:06.985 ********* 2026-03-10 01:02:36.969962 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:02:36.969972 | orchestrator | 2026-03-10 01:02:36.969983 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-10 01:02:36.969994 | orchestrator | Tuesday 10 March 2026 01:02:30 +0000 (0:00:02.660) 0:03:09.646 ********* 2026-03-10 01:02:36.970005 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:02:36.970061 | orchestrator | 2026-03-10 01:02:36.970076 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-10 01:02:36.970087 | orchestrator | Tuesday 10 March 2026 01:02:33 +0000 (0:00:02.989) 0:03:12.636 ********* 2026-03-10 01:02:36.970098 | orchestrator | changed: 
[testbed-node-0] 2026-03-10 01:02:36.970109 | orchestrator | 2026-03-10 01:02:36.970120 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:02:36.970144 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-10 01:02:36.970156 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-10 01:02:36.970175 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-10 01:02:36.970187 | orchestrator | 2026-03-10 01:02:36.970198 | orchestrator | 2026-03-10 01:02:36.970209 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:02:36.970219 | orchestrator | Tuesday 10 March 2026 01:02:36 +0000 (0:00:02.752) 0:03:15.388 ********* 2026-03-10 01:02:36.970231 | orchestrator | =============================================================================== 2026-03-10 01:02:36.970241 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 83.14s 2026-03-10 01:02:36.970252 | orchestrator | opensearch : Restart opensearch container ------------------------------ 72.42s 2026-03-10 01:02:36.970263 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.35s 2026-03-10 01:02:36.970274 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.99s 2026-03-10 01:02:36.970285 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.82s 2026-03-10 01:02:36.970296 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 2.78s 2026-03-10 01:02:36.970307 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.75s 2026-03-10 01:02:36.970318 | orchestrator | opensearch : Wait for OpenSearch cluster to 
become healthy -------------- 2.75s 2026-03-10 01:02:36.970329 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.74s 2026-03-10 01:02:36.970340 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.68s 2026-03-10 01:02:36.970350 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.66s 2026-03-10 01:02:36.970362 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 2.45s 2026-03-10 01:02:36.970372 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.34s 2026-03-10 01:02:36.970384 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.88s 2026-03-10 01:02:36.970403 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.43s 2026-03-10 01:02:36.970525 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.17s 2026-03-10 01:02:36.970555 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.91s 2026-03-10 01:02:36.970573 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.79s 2026-03-10 01:02:36.970591 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.74s 2026-03-10 01:02:36.970607 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.56s 2026-03-10 01:02:40.008853 | orchestrator | 2026-03-10 01:02:40 | INFO  | Task abc0b55b-7534-402c-8870-e2ab8ac318a2 is in state STARTED 2026-03-10 01:02:40.012399 | orchestrator | 2026-03-10 01:02:40 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:02:40.013663 | orchestrator | 2026-03-10 01:02:40 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:02:40.013946 | orchestrator | 2026-03-10 
01:02:40 | INFO  | Wait 1 second(s) until the next check [... identical polling of the same three tasks repeated every 3 seconds from 01:02:43 to 01:03:34, elided ...] 2026-03-10 01:03:37.955888 | orchestrator | 2026-03-10 01:03:37 | INFO  | Task abc0b55b-7534-402c-8870-e2ab8ac318a2 is in state STARTED 2026-03-10 01:03:37.960966 | orchestrator | 2026-03-10 01:03:37 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:03:37.963207 | orchestrator | 2026-03-10 01:03:37 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:03:37.963382 | orchestrator | 2026-03-10 01:03:37 | INFO  | Wait 1 second(s) until the next
check 2026-03-10 01:03:41.017638 | orchestrator | 2026-03-10 01:03:41 | INFO  | Task abc0b55b-7534-402c-8870-e2ab8ac318a2 is in state STARTED 2026-03-10 01:03:41.020745 | orchestrator | 2026-03-10 01:03:41 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:03:41.022207 | orchestrator | 2026-03-10 01:03:41 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:03:41.022303 | orchestrator | 2026-03-10 01:03:41 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:03:44.071553 | orchestrator | 2026-03-10 01:03:44 | INFO  | Task abc0b55b-7534-402c-8870-e2ab8ac318a2 is in state STARTED 2026-03-10 01:03:44.073644 | orchestrator | 2026-03-10 01:03:44 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:03:44.077762 | orchestrator | 2026-03-10 01:03:44 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:03:44.077848 | orchestrator | 2026-03-10 01:03:44 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:03:47.124809 | orchestrator | 2026-03-10 01:03:47 | INFO  | Task abc0b55b-7534-402c-8870-e2ab8ac318a2 is in state STARTED 2026-03-10 01:03:47.125976 | orchestrator | 2026-03-10 01:03:47 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:03:47.127508 | orchestrator | 2026-03-10 01:03:47 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:03:47.129705 | orchestrator | 2026-03-10 01:03:47 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:03:50.190286 | orchestrator | 2026-03-10 01:03:50 | INFO  | Task abc0b55b-7534-402c-8870-e2ab8ac318a2 is in state SUCCESS 2026-03-10 01:03:50.191258 | orchestrator | 2026-03-10 01:03:50.191324 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-10 01:03:50.191337 | orchestrator | 2.16.14 2026-03-10 01:03:50.191348 | orchestrator | 2026-03-10 01:03:50.191359 | 
orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-10 01:03:50.191369 | orchestrator | 2026-03-10 01:03:50.191379 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-10 01:03:50.191388 | orchestrator | Tuesday 10 March 2026 01:01:33 +0000 (0:00:00.569) 0:00:00.569 ********* 2026-03-10 01:03:50.191444 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:03:50.191528 | orchestrator | 2026-03-10 01:03:50.191539 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-10 01:03:50.191548 | orchestrator | Tuesday 10 March 2026 01:01:34 +0000 (0:00:00.569) 0:00:01.139 ********* 2026-03-10 01:03:50.191557 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:03:50.191567 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:03:50.191575 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:03:50.191584 | orchestrator | 2026-03-10 01:03:50.191593 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-10 01:03:50.191601 | orchestrator | Tuesday 10 March 2026 01:01:34 +0000 (0:00:00.593) 0:00:01.733 ********* 2026-03-10 01:03:50.191609 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:03:50.191618 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:03:50.191628 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:03:50.191637 | orchestrator | 2026-03-10 01:03:50.191646 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-10 01:03:50.191656 | orchestrator | Tuesday 10 March 2026 01:01:35 +0000 (0:00:00.312) 0:00:02.046 ********* 2026-03-10 01:03:50.191665 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:03:50.191674 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:03:50.191683 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:03:50.191691 
| orchestrator | 2026-03-10 01:03:50.191700 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-10 01:03:50.191709 | orchestrator | Tuesday 10 March 2026 01:01:35 +0000 (0:00:00.876) 0:00:02.922 ********* 2026-03-10 01:03:50.191718 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:03:50.191727 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:03:50.191736 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:03:50.191745 | orchestrator | 2026-03-10 01:03:50.191754 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-10 01:03:50.191763 | orchestrator | Tuesday 10 March 2026 01:01:36 +0000 (0:00:00.352) 0:00:03.274 ********* 2026-03-10 01:03:50.191771 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:03:50.191780 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:03:50.191790 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:03:50.191800 | orchestrator | 2026-03-10 01:03:50.191808 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-10 01:03:50.191816 | orchestrator | Tuesday 10 March 2026 01:01:36 +0000 (0:00:00.335) 0:00:03.610 ********* 2026-03-10 01:03:50.192252 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:03:50.192279 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:03:50.192289 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:03:50.192299 | orchestrator | 2026-03-10 01:03:50.192309 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-10 01:03:50.192320 | orchestrator | Tuesday 10 March 2026 01:01:36 +0000 (0:00:00.341) 0:00:03.951 ********* 2026-03-10 01:03:50.192330 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.192341 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.192351 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:03:50.192361 | orchestrator | 2026-03-10 01:03:50.192371 | 
orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-10 01:03:50.192380 | orchestrator | Tuesday 10 March 2026 01:01:37 +0000 (0:00:00.561) 0:00:04.512 ********* 2026-03-10 01:03:50.192431 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:03:50.192442 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:03:50.192452 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:03:50.192461 | orchestrator | 2026-03-10 01:03:50.192470 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-10 01:03:50.192584 | orchestrator | Tuesday 10 March 2026 01:01:37 +0000 (0:00:00.307) 0:00:04.819 ********* 2026-03-10 01:03:50.192595 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-10 01:03:50.192605 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-10 01:03:50.192613 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-10 01:03:50.192694 | orchestrator | 2026-03-10 01:03:50.192707 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-10 01:03:50.192715 | orchestrator | Tuesday 10 March 2026 01:01:38 +0000 (0:00:00.687) 0:00:05.507 ********* 2026-03-10 01:03:50.192726 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:03:50.192734 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:03:50.192742 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:03:50.192750 | orchestrator | 2026-03-10 01:03:50.192758 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-10 01:03:50.192766 | orchestrator | Tuesday 10 March 2026 01:01:39 +0000 (0:00:00.485) 0:00:05.992 ********* 2026-03-10 01:03:50.192789 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-10 01:03:50.192987 | orchestrator | 
ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-10 01:03:50.193008 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-10 01:03:50.193018 | orchestrator | 2026-03-10 01:03:50.193028 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-10 01:03:50.193044 | orchestrator | Tuesday 10 March 2026 01:01:41 +0000 (0:00:02.187) 0:00:08.180 ********* 2026-03-10 01:03:50.193054 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-10 01:03:50.193065 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-10 01:03:50.193074 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-10 01:03:50.193083 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.193091 | orchestrator | 2026-03-10 01:03:50.193138 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-10 01:03:50.193151 | orchestrator | Tuesday 10 March 2026 01:01:41 +0000 (0:00:00.692) 0:00:08.873 ********* 2026-03-10 01:03:50.193164 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.193179 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.193190 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.193201 | 
orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.193212 | orchestrator | 2026-03-10 01:03:50.193220 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-10 01:03:50.193229 | orchestrator | Tuesday 10 March 2026 01:01:42 +0000 (0:00:00.928) 0:00:09.801 ********* 2026-03-10 01:03:50.193240 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.193267 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.193278 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.193287 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.193297 | orchestrator | 2026-03-10 01:03:50.193306 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-10 01:03:50.193315 | orchestrator | Tuesday 10 March 2026 
01:01:43 +0000 (0:00:00.425) 0:00:10.226 ********* 2026-03-10 01:03:50.193327 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'e59693f4a7ad', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-10 01:01:39.699211', 'end': '2026-03-10 01:01:39.743877', 'delta': '0:00:00.044666', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['e59693f4a7ad'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-10 01:03:50.193349 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '06fa301e478a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-10 01:01:40.499195', 'end': '2026-03-10 01:01:40.546880', 'delta': '0:00:00.047685', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['06fa301e478a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-10 01:03:50.193391 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '3ab8bd98a69b', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-10 01:01:41.046592', 'end': '2026-03-10 01:01:41.084078', 'delta': '0:00:00.037486', 'msg': '', 'invocation': {'module_args': 
{'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3ab8bd98a69b'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-10 01:03:50.193462 | orchestrator | 2026-03-10 01:03:50.193472 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-10 01:03:50.193480 | orchestrator | Tuesday 10 March 2026 01:01:43 +0000 (0:00:00.213) 0:00:10.440 ********* 2026-03-10 01:03:50.193488 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:03:50.193506 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:03:50.193516 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:03:50.193523 | orchestrator | 2026-03-10 01:03:50.193531 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-10 01:03:50.193539 | orchestrator | Tuesday 10 March 2026 01:01:43 +0000 (0:00:00.467) 0:00:10.908 ********* 2026-03-10 01:03:50.193547 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-10 01:03:50.193556 | orchestrator | 2026-03-10 01:03:50.193565 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-10 01:03:50.193573 | orchestrator | Tuesday 10 March 2026 01:01:45 +0000 (0:00:01.707) 0:00:12.615 ********* 2026-03-10 01:03:50.193581 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.193589 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.193596 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:03:50.193605 | orchestrator | 2026-03-10 01:03:50.193613 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2026-03-10 01:03:50.193621 | 
orchestrator | Tuesday 10 March 2026 01:01:45 +0000 (0:00:00.331) 0:00:12.946 ********* 2026-03-10 01:03:50.193629 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.193637 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.193645 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:03:50.193654 | orchestrator | 2026-03-10 01:03:50.193663 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-10 01:03:50.193671 | orchestrator | Tuesday 10 March 2026 01:01:46 +0000 (0:00:00.489) 0:00:13.436 ********* 2026-03-10 01:03:50.193680 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.193689 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.193698 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:03:50.193707 | orchestrator | 2026-03-10 01:03:50.193716 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-10 01:03:50.193724 | orchestrator | Tuesday 10 March 2026 01:01:47 +0000 (0:00:00.565) 0:00:14.002 ********* 2026-03-10 01:03:50.193732 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:03:50.193739 | orchestrator | 2026-03-10 01:03:50.193746 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-10 01:03:50.193754 | orchestrator | Tuesday 10 March 2026 01:01:47 +0000 (0:00:00.137) 0:00:14.139 ********* 2026-03-10 01:03:50.193762 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.193770 | orchestrator | 2026-03-10 01:03:50.193778 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-10 01:03:50.193787 | orchestrator | Tuesday 10 March 2026 01:01:47 +0000 (0:00:00.284) 0:00:14.423 ********* 2026-03-10 01:03:50.193793 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.193797 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.193802 | orchestrator | skipping: 
[testbed-node-5] 2026-03-10 01:03:50.193807 | orchestrator | 2026-03-10 01:03:50.193812 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-10 01:03:50.193816 | orchestrator | Tuesday 10 March 2026 01:01:47 +0000 (0:00:00.313) 0:00:14.737 ********* 2026-03-10 01:03:50.193822 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.193826 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.193831 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:03:50.193836 | orchestrator | 2026-03-10 01:03:50.193841 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-10 01:03:50.193846 | orchestrator | Tuesday 10 March 2026 01:01:48 +0000 (0:00:00.336) 0:00:15.074 ********* 2026-03-10 01:03:50.193850 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.193855 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.193860 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:03:50.193865 | orchestrator | 2026-03-10 01:03:50.193870 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-10 01:03:50.193875 | orchestrator | Tuesday 10 March 2026 01:01:48 +0000 (0:00:00.567) 0:00:15.641 ********* 2026-03-10 01:03:50.193888 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.193898 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.193904 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:03:50.193909 | orchestrator | 2026-03-10 01:03:50.193913 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-10 01:03:50.193918 | orchestrator | Tuesday 10 March 2026 01:01:49 +0000 (0:00:00.345) 0:00:15.987 ********* 2026-03-10 01:03:50.193923 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.193928 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.193933 | orchestrator | skipping: 
[testbed-node-5] 2026-03-10 01:03:50.193938 | orchestrator | 2026-03-10 01:03:50.193943 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-10 01:03:50.193948 | orchestrator | Tuesday 10 March 2026 01:01:49 +0000 (0:00:00.358) 0:00:16.345 ********* 2026-03-10 01:03:50.193953 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.193957 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.193962 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:03:50.193996 | orchestrator | 2026-03-10 01:03:50.194002 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-10 01:03:50.194007 | orchestrator | Tuesday 10 March 2026 01:01:49 +0000 (0:00:00.342) 0:00:16.687 ********* 2026-03-10 01:03:50.194052 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.194126 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.194133 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:03:50.194138 | orchestrator | 2026-03-10 01:03:50.194142 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-10 01:03:50.194147 | orchestrator | Tuesday 10 March 2026 01:01:50 +0000 (0:00:00.592) 0:00:17.279 ********* 2026-03-10 01:03:50.194154 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c2da093f--67f0--5a54--a6a1--4e0ffcdb14df-osd--block--c2da093f--67f0--5a54--a6a1--4e0ffcdb14df', 'dm-uuid-LVM-fg8D7lPuLf2SnuohSesegra2TySSTgsXKLUHoRdmUx1vIjgJIQf595TyFYvkACQi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194161 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1e5abf04--63a5--5f41--bb2b--61caa92fdc91-osd--block--1e5abf04--63a5--5f41--bb2b--61caa92fdc91', 'dm-uuid-LVM-0LrrmFudB3mDRYcDT7ZzcT6hmoO3AZ7qv3BPRWdnHLmIEehPbOPsUUkqz5NluNBY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194167 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194173 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194179 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194190 
| orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194200 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194232 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c7cdfd74--cae8--56d1--a0f9--4438e0fe684e-osd--block--c7cdfd74--cae8--56d1--a0f9--4438e0fe684e', 'dm-uuid-LVM-crZZNUYAkiNGTnZUimsr43acDHrET7dTYRUkxsOmreHd8425IdJBYjuVWBlXVoKJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-10 
01:03:50.194238 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5a55caf6--84ae--542a--a466--02d3e6c6095e-osd--block--5a55caf6--84ae--542a--a466--02d3e6c6095e', 'dm-uuid-LVM-ww10THr3vAWs6YC2YLliCJBNkkdUNVlsx91VZ2PSKLbQmEw8FVxjqCv8vfg6Vd3v'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194244 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194249 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194254 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 
1}})  2026-03-10 01:03:50.194263 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194276 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194302 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part1', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part14', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part15', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part16', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:03:50.194310 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194320 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194334 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c2da093f--67f0--5a54--a6a1--4e0ffcdb14df-osd--block--c2da093f--67f0--5a54--a6a1--4e0ffcdb14df'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tGGJVI-Kjsh-UAJd-ru60-SoR8-9teX-TdvgcC', 'scsi-0QEMU_QEMU_HARDDISK_8f76f090-a1e0-42c3-8072-1f51d4df9a8c', 'scsi-SQEMU_QEMU_HARDDISK_8f76f090-a1e0-42c3-8072-1f51d4df9a8c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:03:50.194341 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194362 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1e5abf04--63a5--5f41--bb2b--61caa92fdc91-osd--block--1e5abf04--63a5--5f41--bb2b--61caa92fdc91'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yua8fM-XwQN-51jl-eNOV-Qqrh-Xeao-CP3M9d', 'scsi-0QEMU_QEMU_HARDDISK_e4712c11-e6a0-4829-954c-3e21e73d266a', 'scsi-SQEMU_QEMU_HARDDISK_e4712c11-e6a0-4829-954c-3e21e73d266a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:03:50.194369 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b638158-044f-4e2c-a80d-2256f7b00733', 'scsi-SQEMU_QEMU_HARDDISK_5b638158-044f-4e2c-a80d-2256f7b00733'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:03:50.194380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:03:50.194454 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part1', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part14', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part15', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part16', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:03:50.194464 | 
orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.194470 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c7cdfd74--cae8--56d1--a0f9--4438e0fe684e-osd--block--c7cdfd74--cae8--56d1--a0f9--4438e0fe684e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sWpfpg-U2qw-FqGq-Mxi9-RNNI-Wgzt-S0TXXF', 'scsi-0QEMU_QEMU_HARDDISK_b94fdc5f-2b9b-46a8-a60f-74e41f269a0d', 'scsi-SQEMU_QEMU_HARDDISK_b94fdc5f-2b9b-46a8-a60f-74e41f269a0d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:03:50.194476 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--5a55caf6--84ae--542a--a466--02d3e6c6095e-osd--block--5a55caf6--84ae--542a--a466--02d3e6c6095e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jWTw2W-lrwH-PHTk-lRyE-lPFy-XJdm-7p63ov', 'scsi-0QEMU_QEMU_HARDDISK_32f512e5-1c04-4680-91d7-4268581c2350', 'scsi-SQEMU_QEMU_HARDDISK_32f512e5-1c04-4680-91d7-4268581c2350'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:03:50.194481 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21ab9d1e-083b-4748-865b-4e7341aec385', 'scsi-SQEMU_QEMU_HARDDISK_21ab9d1e-083b-4748-865b-4e7341aec385'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:03:50.194490 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--276dc5cf--0fff--57f4--b280--c3cda8556bee-osd--block--276dc5cf--0fff--57f4--b280--c3cda8556bee', 'dm-uuid-LVM-yi1gXmNOndbMseZbmXZIlMtCjradzf0QOPnrVXCTWCMoVR6dlw68AbG7U9XJCe9Q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194498 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1c4f45a1--f837--5281--b6b5--75662d68eedd-osd--block--1c4f45a1--f837--5281--b6b5--75662d68eedd', 'dm-uuid-LVM-JYeXdpyT69xd4mJwK8fftq9TFlsAtIjzxPozNSH0AeW9ePThwtiJHfCbXkcYanKl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': 
['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:03:50.194515 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194520 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.194525 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194535 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194549 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194554 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-10 01:03:50.194574 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part1', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part14', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part15', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part16', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': 
'09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:03:50.194584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--276dc5cf--0fff--57f4--b280--c3cda8556bee-osd--block--276dc5cf--0fff--57f4--b280--c3cda8556bee'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-P8ZAa1-SZlc-hZ12-Pgh0-jOFD-cm7l-qlcnR0', 'scsi-0QEMU_QEMU_HARDDISK_525599b5-6362-4aac-a0b3-94bd4cb39972', 'scsi-SQEMU_QEMU_HARDDISK_525599b5-6362-4aac-a0b3-94bd4cb39972'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:03:50.194590 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--1c4f45a1--f837--5281--b6b5--75662d68eedd-osd--block--1c4f45a1--f837--5281--b6b5--75662d68eedd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Buht1o-r91x-2hlE-A6fu-XTGr-iGdr-0E5mC7', 'scsi-0QEMU_QEMU_HARDDISK_885a647d-e739-4ea9-ae01-9c2ce04d6822', 'scsi-SQEMU_QEMU_HARDDISK_885a647d-e739-4ea9-ae01-9c2ce04d6822'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:03:50.194598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e39970d-8644-42a9-a13b-932f32b0237f', 'scsi-SQEMU_QEMU_HARDDISK_3e39970d-8644-42a9-a13b-932f32b0237f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:03:50.194607 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-03-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-10 01:03:50.194612 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:03:50.194617 | orchestrator | 2026-03-10 01:03:50.194622 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-03-10 01:03:50.194627 | orchestrator | Tuesday 10 March 2026 01:01:50 +0000 (0:00:00.610) 0:00:17.890 ********* 2026-03-10 01:03:50.194633 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c2da093f--67f0--5a54--a6a1--4e0ffcdb14df-osd--block--c2da093f--67f0--5a54--a6a1--4e0ffcdb14df', 'dm-uuid-LVM-fg8D7lPuLf2SnuohSesegra2TySSTgsXKLUHoRdmUx1vIjgJIQf595TyFYvkACQi'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194643 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1e5abf04--63a5--5f41--bb2b--61caa92fdc91-osd--block--1e5abf04--63a5--5f41--bb2b--61caa92fdc91', 'dm-uuid-LVM-0LrrmFudB3mDRYcDT7ZzcT6hmoO3AZ7qv3BPRWdnHLmIEehPbOPsUUkqz5NluNBY'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194648 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194653 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194662 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194677 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': 
{'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194685 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194694 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194707 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--c7cdfd74--cae8--56d1--a0f9--4438e0fe684e-osd--block--c7cdfd74--cae8--56d1--a0f9--4438e0fe684e', 'dm-uuid-LVM-crZZNUYAkiNGTnZUimsr43acDHrET7dTYRUkxsOmreHd8425IdJBYjuVWBlXVoKJ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194715 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194727 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--5a55caf6--84ae--542a--a466--02d3e6c6095e-osd--block--5a55caf6--84ae--542a--a466--02d3e6c6095e', 'dm-uuid-LVM-ww10THr3vAWs6YC2YLliCJBNkkdUNVlsx91VZ2PSKLbQmEw8FVxjqCv8vfg6Vd3v'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'}) 
 2026-03-10 01:03:50.194741 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194750 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194764 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part1', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part14', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part15', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part16', 'scsi-SQEMU_QEMU_HARDDISK_71037f65-dbb1-4725-897c-91d536174aba-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-10 01:03:50.194778 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194795 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c2da093f--67f0--5a54--a6a1--4e0ffcdb14df-osd--block--c2da093f--67f0--5a54--a6a1--4e0ffcdb14df'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tGGJVI-Kjsh-UAJd-ru60-SoR8-9teX-TdvgcC', 'scsi-0QEMU_QEMU_HARDDISK_8f76f090-a1e0-42c3-8072-1f51d4df9a8c', 'scsi-SQEMU_QEMU_HARDDISK_8f76f090-a1e0-42c3-8072-1f51d4df9a8c'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194805 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194822 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1e5abf04--63a5--5f41--bb2b--61caa92fdc91-osd--block--1e5abf04--63a5--5f41--bb2b--61caa92fdc91'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-yua8fM-XwQN-51jl-eNOV-Qqrh-Xeao-CP3M9d', 'scsi-0QEMU_QEMU_HARDDISK_e4712c11-e6a0-4829-954c-3e21e73d266a', 'scsi-SQEMU_QEMU_HARDDISK_e4712c11-e6a0-4829-954c-3e21e73d266a'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194829 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194839 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_5b638158-044f-4e2c-a80d-2256f7b00733', 'scsi-SQEMU_QEMU_HARDDISK_5b638158-044f-4e2c-a80d-2256f7b00733'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194849 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194856 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-03-28-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194866 | orchestrator | skipping: 
[testbed-node-3] 2026-03-10 01:03:50.194872 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194878 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194884 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194898 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part1', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part14', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part15', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part16', 'scsi-SQEMU_QEMU_HARDDISK_b98669d5-adca-4914-bd0a-18edeba10c2d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194920 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c7cdfd74--cae8--56d1--a0f9--4438e0fe684e-osd--block--c7cdfd74--cae8--56d1--a0f9--4438e0fe684e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-sWpfpg-U2qw-FqGq-Mxi9-RNNI-Wgzt-S0TXXF', 'scsi-0QEMU_QEMU_HARDDISK_b94fdc5f-2b9b-46a8-a60f-74e41f269a0d', 'scsi-SQEMU_QEMU_HARDDISK_b94fdc5f-2b9b-46a8-a60f-74e41f269a0d'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194926 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--276dc5cf--0fff--57f4--b280--c3cda8556bee-osd--block--276dc5cf--0fff--57f4--b280--c3cda8556bee', 'dm-uuid-LVM-yi1gXmNOndbMseZbmXZIlMtCjradzf0QOPnrVXCTWCMoVR6dlw68AbG7U9XJCe9Q'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': 
'20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194937 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--5a55caf6--84ae--542a--a466--02d3e6c6095e-osd--block--5a55caf6--84ae--542a--a466--02d3e6c6095e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-jWTw2W-lrwH-PHTk-lRyE-lPFy-XJdm-7p63ov', 'scsi-0QEMU_QEMU_HARDDISK_32f512e5-1c04-4680-91d7-4268581c2350', 'scsi-SQEMU_QEMU_HARDDISK_32f512e5-1c04-4680-91d7-4268581c2350'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194953 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--1c4f45a1--f837--5281--b6b5--75662d68eedd-osd--block--1c4f45a1--f837--5281--b6b5--75662d68eedd', 'dm-uuid-LVM-JYeXdpyT69xd4mJwK8fftq9TFlsAtIjzxPozNSH0AeW9ePThwtiJHfCbXkcYanKl'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194967 | orchestrator | skipping: 
[testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_21ab9d1e-083b-4748-865b-4e7341aec385', 'scsi-SQEMU_QEMU_HARDDISK_21ab9d1e-083b-4748-865b-4e7341aec385'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194976 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194984 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-03-26-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.194992 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.195000 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.195011 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.195026 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.195040 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.195048 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.195056 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.195065 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.195083 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part1', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part14', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part15', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part16', 'scsi-SQEMU_QEMU_HARDDISK_b0fbedb1-1079-4b81-9d18-c7f1d1a1550b-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-10 01:03:50.195099 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--276dc5cf--0fff--57f4--b280--c3cda8556bee-osd--block--276dc5cf--0fff--57f4--b280--c3cda8556bee'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-P8ZAa1-SZlc-hZ12-Pgh0-jOFD-cm7l-qlcnR0', 'scsi-0QEMU_QEMU_HARDDISK_525599b5-6362-4aac-a0b3-94bd4cb39972', 'scsi-SQEMU_QEMU_HARDDISK_525599b5-6362-4aac-a0b3-94bd4cb39972'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.195108 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--1c4f45a1--f837--5281--b6b5--75662d68eedd-osd--block--1c4f45a1--f837--5281--b6b5--75662d68eedd'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-Buht1o-r91x-2hlE-A6fu-XTGr-iGdr-0E5mC7', 'scsi-0QEMU_QEMU_HARDDISK_885a647d-e739-4ea9-ae01-9c2ce04d6822', 'scsi-SQEMU_QEMU_HARDDISK_885a647d-e739-4ea9-ae01-9c2ce04d6822'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.195121 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e39970d-8644-42a9-a13b-932f32b0237f', 'scsi-SQEMU_QEMU_HARDDISK_3e39970d-8644-42a9-a13b-932f32b0237f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.195136 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-10-00-03-34-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-10 01:03:50.195151 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:03:50.195160 | orchestrator | 2026-03-10 01:03:50.195169 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-10 01:03:50.195177 | orchestrator | Tuesday 10 March 2026 01:01:51 +0000 (0:00:00.661) 0:00:18.552 ********* 2026-03-10 01:03:50.195186 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:03:50.195194 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:03:50.195202 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:03:50.195210 | orchestrator | 2026-03-10 01:03:50.195218 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-10 01:03:50.195226 | orchestrator | Tuesday 10 March 2026 01:01:52 +0000 (0:00:00.711) 0:00:19.263 ********* 2026-03-10 01:03:50.195235 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:03:50.195243 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:03:50.195250 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:03:50.195259 | orchestrator | 2026-03-10 01:03:50.195268 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-10 01:03:50.195276 | orchestrator | Tuesday 10 March 2026 01:01:52 +0000 (0:00:00.535) 0:00:19.798 ********* 2026-03-10 01:03:50.195285 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:03:50.195293 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:03:50.195301 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:03:50.195310 | orchestrator | 2026-03-10 01:03:50.195318 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-10 01:03:50.195326 | orchestrator | Tuesday 10 March 2026 01:01:53 +0000 (0:00:00.640) 0:00:20.439 
********* 2026-03-10 01:03:50.195334 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.195343 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.195351 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:03:50.195358 | orchestrator | 2026-03-10 01:03:50.195366 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-10 01:03:50.195374 | orchestrator | Tuesday 10 March 2026 01:01:53 +0000 (0:00:00.312) 0:00:20.751 ********* 2026-03-10 01:03:50.195382 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.195391 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.195425 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:03:50.195435 | orchestrator | 2026-03-10 01:03:50.195443 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-10 01:03:50.195450 | orchestrator | Tuesday 10 March 2026 01:01:54 +0000 (0:00:00.445) 0:00:21.197 ********* 2026-03-10 01:03:50.195458 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.195466 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.195474 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:03:50.195482 | orchestrator | 2026-03-10 01:03:50.195490 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-10 01:03:50.195621 | orchestrator | Tuesday 10 March 2026 01:01:54 +0000 (0:00:00.573) 0:00:21.770 ********* 2026-03-10 01:03:50.195632 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-10 01:03:50.195640 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-10 01:03:50.195648 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-10 01:03:50.195656 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-10 01:03:50.195664 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-10 01:03:50.195672 | orchestrator 
| ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-10 01:03:50.195680 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-10 01:03:50.195688 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-10 01:03:50.195706 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-10 01:03:50.195714 | orchestrator | 2026-03-10 01:03:50.195722 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-10 01:03:50.195731 | orchestrator | Tuesday 10 March 2026 01:01:55 +0000 (0:00:01.005) 0:00:22.776 ********* 2026-03-10 01:03:50.195740 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-10 01:03:50.195748 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-10 01:03:50.195756 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-10 01:03:50.195765 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.195773 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-10 01:03:50.195782 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-10 01:03:50.195790 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2026-03-10 01:03:50.195798 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.195807 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-10 01:03:50.195815 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-10 01:03:50.195828 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-10 01:03:50.195837 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:03:50.195845 | orchestrator | 2026-03-10 01:03:50.195853 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-10 01:03:50.195862 | orchestrator | Tuesday 10 March 2026 01:01:56 +0000 (0:00:00.415) 0:00:23.191 ********* 2026-03-10 
01:03:50.195870 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:03:50.195878 | orchestrator | 2026-03-10 01:03:50.195887 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-10 01:03:50.195896 | orchestrator | Tuesday 10 March 2026 01:01:56 +0000 (0:00:00.762) 0:00:23.954 ********* 2026-03-10 01:03:50.195913 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.195922 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.195930 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:03:50.195936 | orchestrator | 2026-03-10 01:03:50.195941 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-10 01:03:50.195946 | orchestrator | Tuesday 10 March 2026 01:01:57 +0000 (0:00:00.473) 0:00:24.427 ********* 2026-03-10 01:03:50.195951 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.195956 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.195960 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:03:50.195965 | orchestrator | 2026-03-10 01:03:50.195970 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-10 01:03:50.195975 | orchestrator | Tuesday 10 March 2026 01:01:57 +0000 (0:00:00.339) 0:00:24.767 ********* 2026-03-10 01:03:50.195980 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.195985 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.195990 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:03:50.195995 | orchestrator | 2026-03-10 01:03:50.196000 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-10 01:03:50.196005 | orchestrator | Tuesday 10 March 2026 01:01:58 +0000 (0:00:00.348) 0:00:25.115 ********* 2026-03-10 
01:03:50.196009 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:03:50.196014 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:03:50.196019 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:03:50.196024 | orchestrator | 2026-03-10 01:03:50.196029 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-10 01:03:50.196034 | orchestrator | Tuesday 10 March 2026 01:01:58 +0000 (0:00:00.753) 0:00:25.869 ********* 2026-03-10 01:03:50.196039 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-10 01:03:50.196044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-10 01:03:50.196058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-10 01:03:50.196063 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.196068 | orchestrator | 2026-03-10 01:03:50.196073 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-10 01:03:50.196077 | orchestrator | Tuesday 10 March 2026 01:01:59 +0000 (0:00:00.405) 0:00:26.274 ********* 2026-03-10 01:03:50.196082 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-10 01:03:50.196087 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-10 01:03:50.196091 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-10 01:03:50.196096 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.196101 | orchestrator | 2026-03-10 01:03:50.196106 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-10 01:03:50.196111 | orchestrator | Tuesday 10 March 2026 01:01:59 +0000 (0:00:00.394) 0:00:26.668 ********* 2026-03-10 01:03:50.196115 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-10 01:03:50.196120 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-10 01:03:50.196125 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-10 01:03:50.196130 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.196135 | orchestrator | 2026-03-10 01:03:50.196140 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-10 01:03:50.196144 | orchestrator | Tuesday 10 March 2026 01:02:00 +0000 (0:00:00.407) 0:00:27.076 ********* 2026-03-10 01:03:50.196149 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:03:50.196154 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:03:50.196159 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:03:50.196164 | orchestrator | 2026-03-10 01:03:50.196169 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-10 01:03:50.196174 | orchestrator | Tuesday 10 March 2026 01:02:00 +0000 (0:00:00.341) 0:00:27.418 ********* 2026-03-10 01:03:50.196178 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-10 01:03:50.196183 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-10 01:03:50.196188 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-10 01:03:50.196193 | orchestrator | 2026-03-10 01:03:50.196198 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-10 01:03:50.196203 | orchestrator | Tuesday 10 March 2026 01:02:00 +0000 (0:00:00.512) 0:00:27.931 ********* 2026-03-10 01:03:50.196208 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-10 01:03:50.196213 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-10 01:03:50.196218 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-10 01:03:50.196223 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-10 01:03:50.196228 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2026-03-10 01:03:50.196233 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-10 01:03:50.196237 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-10 01:03:50.196243 | orchestrator | 2026-03-10 01:03:50.196249 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-10 01:03:50.196260 | orchestrator | Tuesday 10 March 2026 01:02:02 +0000 (0:00:01.069) 0:00:29.000 ********* 2026-03-10 01:03:50.196266 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-10 01:03:50.196271 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-10 01:03:50.196277 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-10 01:03:50.196282 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-10 01:03:50.196288 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-10 01:03:50.196298 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-10 01:03:50.196308 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-10 01:03:50.196314 | orchestrator | 2026-03-10 01:03:50.196320 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2026-03-10 01:03:50.196325 | orchestrator | Tuesday 10 March 2026 01:02:04 +0000 (0:00:02.117) 0:00:31.117 ********* 2026-03-10 01:03:50.196331 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:03:50.196336 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:03:50.196341 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2026-03-10 01:03:50.196345 | orchestrator | 2026-03-10 01:03:50.196350 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2026-03-10 01:03:50.196355 | orchestrator | Tuesday 10 March 2026 01:02:04 +0000 (0:00:00.414) 0:00:31.531 ********* 2026-03-10 01:03:50.196361 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-10 01:03:50.196367 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-10 01:03:50.196373 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-10 01:03:50.196378 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-10 01:03:50.196383 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2026-03-10 01:03:50.196388 | orchestrator | 2026-03-10 01:03:50.196393 | orchestrator | TASK [generate keys] 
*********************************************************** 2026-03-10 01:03:50.196439 | orchestrator | Tuesday 10 March 2026 01:02:52 +0000 (0:00:47.620) 0:01:19.152 ********* 2026-03-10 01:03:50.196447 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:03:50.196452 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:03:50.196457 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:03:50.196462 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:03:50.196467 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:03:50.196471 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:03:50.196476 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2026-03-10 01:03:50.196481 | orchestrator | 2026-03-10 01:03:50.196486 | orchestrator | TASK [get keys from monitors] ************************************************** 2026-03-10 01:03:50.196490 | orchestrator | Tuesday 10 March 2026 01:03:16 +0000 (0:00:23.959) 0:01:43.112 ********* 2026-03-10 01:03:50.196495 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:03:50.196500 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:03:50.196510 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:03:50.196514 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:03:50.196520 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:03:50.196524 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:03:50.196529 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2026-03-10 01:03:50.196534 | orchestrator | 2026-03-10 01:03:50.196539 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2026-03-10 01:03:50.196544 | orchestrator | Tuesday 10 March 2026 01:03:28 +0000 (0:00:11.992) 0:01:55.105 ********* 2026-03-10 01:03:50.196549 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:03:50.196554 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-10 01:03:50.196559 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-10 01:03:50.196564 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:03:50.196569 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-10 01:03:50.196579 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-10 01:03:50.196584 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:03:50.196589 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-10 01:03:50.196593 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-10 01:03:50.196598 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:03:50.196603 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-10 01:03:50.196608 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-10 01:03:50.196613 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:03:50.196618 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2026-03-10 01:03:50.196623 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-10 01:03:50.196628 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-10 01:03:50.196633 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2026-03-10 01:03:50.196637 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2026-03-10 01:03:50.196642 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}] 2026-03-10 01:03:50.196647 | orchestrator | 2026-03-10 01:03:50.196652 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:03:50.196657 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0 2026-03-10 01:03:50.196663 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-10 01:03:50.196668 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-10 01:03:50.196673 | orchestrator | 2026-03-10 01:03:50.196677 | orchestrator | 2026-03-10 01:03:50.196682 | orchestrator | 2026-03-10 01:03:50.196687 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:03:50.196692 | orchestrator | Tuesday 10 March 2026 01:03:46 +0000 (0:00:18.597) 0:02:13.702 ********* 2026-03-10 01:03:50.196696 | orchestrator | =============================================================================== 2026-03-10 01:03:50.196705 | orchestrator | create openstack pool(s) ----------------------------------------------- 47.62s 2026-03-10 01:03:50.196710 | orchestrator | generate keys ---------------------------------------------------------- 23.96s 2026-03-10 01:03:50.196715 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.60s 
2026-03-10 01:03:50.196720 | orchestrator | get keys from monitors ------------------------------------------------- 11.99s 2026-03-10 01:03:50.196725 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.19s 2026-03-10 01:03:50.196730 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.12s 2026-03-10 01:03:50.196734 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.71s 2026-03-10 01:03:50.196739 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 1.07s 2026-03-10 01:03:50.196744 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 1.01s 2026-03-10 01:03:50.196749 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.93s 2026-03-10 01:03:50.196754 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.88s 2026-03-10 01:03:50.196759 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.76s 2026-03-10 01:03:50.196764 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.75s 2026-03-10 01:03:50.196768 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.71s 2026-03-10 01:03:50.196796 | orchestrator | ceph-facts : Check for a ceph mon socket -------------------------------- 0.69s 2026-03-10 01:03:50.196802 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.69s 2026-03-10 01:03:50.196806 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.66s 2026-03-10 01:03:50.196811 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s 2026-03-10 01:03:50.196816 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.61s 2026-03-10 
01:03:50.196826 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.59s 2026-03-10 01:03:50.196830 | orchestrator | 2026-03-10 01:03:50 | INFO  | Task 9790087e-6382-483f-8030-4d205c56eb31 is in state STARTED 2026-03-10 01:03:50.196835 | orchestrator | 2026-03-10 01:03:50 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:03:50.199584 | orchestrator | 2026-03-10 01:03:50 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:03:50.199634 | orchestrator | 2026-03-10 01:03:50 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:03:53.246793 | orchestrator | 2026-03-10 01:03:53 | INFO  | Task 9790087e-6382-483f-8030-4d205c56eb31 is in state STARTED 2026-03-10 01:03:53.249153 | orchestrator | 2026-03-10 01:03:53 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:03:53.251189 | orchestrator | 2026-03-10 01:03:53 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:03:53.251251 | orchestrator | 2026-03-10 01:03:53 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:03:56.295450 | orchestrator | 2026-03-10 01:03:56 | INFO  | Task 9790087e-6382-483f-8030-4d205c56eb31 is in state STARTED 2026-03-10 01:03:56.297586 | orchestrator | 2026-03-10 01:03:56 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:03:56.299573 | orchestrator | 2026-03-10 01:03:56 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:03:56.299627 | orchestrator | 2026-03-10 01:03:56 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:03:59.339596 | orchestrator | 2026-03-10 01:03:59 | INFO  | Task 9790087e-6382-483f-8030-4d205c56eb31 is in state STARTED 2026-03-10 01:03:59.340689 | orchestrator | 2026-03-10 01:03:59 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:03:59.342518 | orchestrator | 2026-03-10 
01:03:59 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:03:59.342633 | orchestrator | 2026-03-10 01:03:59 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:04:02.389673 | orchestrator | 2026-03-10 01:04:02 | INFO  | Task 9790087e-6382-483f-8030-4d205c56eb31 is in state STARTED 2026-03-10 01:04:02.390952 | orchestrator | 2026-03-10 01:04:02 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:04:02.393574 | orchestrator | 2026-03-10 01:04:02 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:04:02.393608 | orchestrator | 2026-03-10 01:04:02 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:04:05.444054 | orchestrator | 2026-03-10 01:04:05 | INFO  | Task 9790087e-6382-483f-8030-4d205c56eb31 is in state STARTED 2026-03-10 01:04:05.445618 | orchestrator | 2026-03-10 01:04:05 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:04:05.447785 | orchestrator | 2026-03-10 01:04:05 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:04:05.448002 | orchestrator | 2026-03-10 01:04:05 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:04:08.502000 | orchestrator | 2026-03-10 01:04:08 | INFO  | Task 9790087e-6382-483f-8030-4d205c56eb31 is in state STARTED 2026-03-10 01:04:08.502902 | orchestrator | 2026-03-10 01:04:08 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:04:08.504448 | orchestrator | 2026-03-10 01:04:08 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:04:08.504516 | orchestrator | 2026-03-10 01:04:08 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:04:11.552428 | orchestrator | 2026-03-10 01:04:11 | INFO  | Task 9790087e-6382-483f-8030-4d205c56eb31 is in state STARTED 2026-03-10 01:04:11.554128 | orchestrator | 2026-03-10 01:04:11 | INFO  | Task 
5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:04:11.557013 | orchestrator | 2026-03-10 01:04:11 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:04:11.557223 | orchestrator | 2026-03-10 01:04:11 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:04:14.605865 | orchestrator | 2026-03-10 01:04:14 | INFO  | Task 9790087e-6382-483f-8030-4d205c56eb31 is in state STARTED 2026-03-10 01:04:14.606463 | orchestrator | 2026-03-10 01:04:14 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:04:14.608162 | orchestrator | 2026-03-10 01:04:14 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:04:14.608235 | orchestrator | 2026-03-10 01:04:14 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:04:17.660523 | orchestrator | 2026-03-10 01:04:17 | INFO  | Task 9790087e-6382-483f-8030-4d205c56eb31 is in state STARTED 2026-03-10 01:04:17.661993 | orchestrator | 2026-03-10 01:04:17 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:04:17.664133 | orchestrator | 2026-03-10 01:04:17 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:04:17.664196 | orchestrator | 2026-03-10 01:04:17 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:04:20.705774 | orchestrator | 2026-03-10 01:04:20 | INFO  | Task 9790087e-6382-483f-8030-4d205c56eb31 is in state STARTED 2026-03-10 01:04:20.707472 | orchestrator | 2026-03-10 01:04:20 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:04:20.709239 | orchestrator | 2026-03-10 01:04:20 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:04:20.709296 | orchestrator | 2026-03-10 01:04:20 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:04:23.754374 | orchestrator | 2026-03-10 01:04:23 | INFO  | Task 9790087e-6382-483f-8030-4d205c56eb31 is in state 
STARTED 2026-03-10 01:04:23.754658 | orchestrator | 2026-03-10 01:04:23 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:04:23.756321 | orchestrator | 2026-03-10 01:04:23 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:04:23.756362 | orchestrator | 2026-03-10 01:04:23 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:04:26.804181 | orchestrator | 2026-03-10 01:04:26 | INFO  | Task 9790087e-6382-483f-8030-4d205c56eb31 is in state STARTED 2026-03-10 01:04:26.805594 | orchestrator | 2026-03-10 01:04:26 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:04:26.808028 | orchestrator | 2026-03-10 01:04:26 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:04:26.808064 | orchestrator | 2026-03-10 01:04:26 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:04:29.846109 | orchestrator | 2026-03-10 01:04:29 | INFO  | Task 9790087e-6382-483f-8030-4d205c56eb31 is in state SUCCESS 2026-03-10 01:04:29.847341 | orchestrator | 2026-03-10 01:04:29 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:04:29.849605 | orchestrator | 2026-03-10 01:04:29 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:04:29.849656 | orchestrator | 2026-03-10 01:04:29 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:04:32.912666 | orchestrator | 2026-03-10 01:04:32 | INFO  | Task cefea592-6174-45ed-b623-b36383f362a2 is in state STARTED 2026-03-10 01:04:32.916268 | orchestrator | 2026-03-10 01:04:32 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:04:32.918915 | orchestrator | 2026-03-10 01:04:32 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state STARTED 2026-03-10 01:04:32.919141 | orchestrator | 2026-03-10 01:04:32 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:04:35.974380 | orchestrator | 
2026-03-10 01:04:35 | INFO  | Task cefea592-6174-45ed-b623-b36383f362a2 is in state STARTED
2026-03-10 01:04:35.974986 | orchestrator | 2026-03-10 01:04:35 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED
2026-03-10 01:04:35.976910 | orchestrator | 2026-03-10 01:04:35 | INFO  | Task 10e08322-2788-4a08-97d5-eea83d5b854f is in state SUCCESS
2026-03-10 01:04:35.977112 | orchestrator | 2026-03-10 01:04:35 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:04:35.978989 | orchestrator |
2026-03-10 01:04:35.979035 | orchestrator |
2026-03-10 01:04:35.979047 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2026-03-10 01:04:35.979058 | orchestrator |
2026-03-10 01:04:35.979069 | orchestrator | TASK [Check if ceph keys exist] ************************************************
2026-03-10 01:04:35.979081 | orchestrator | Tuesday 10 March 2026 01:03:52 +0000 (0:00:00.193) 0:00:00.193 *********
2026-03-10 01:04:35.979092 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-10 01:04:35.979191 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-10 01:04:35.979203 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-10 01:04:35.979214 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-10 01:04:35.979556 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-10 01:04:35.979587 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-10 01:04:35.979599 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-10 01:04:35.979610 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-10 01:04:35.979621 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-10 01:04:35.979631 | orchestrator |
2026-03-10 01:04:35.979642 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2026-03-10 01:04:35.979653 | orchestrator | Tuesday 10 March 2026 01:03:56 +0000 (0:00:04.688) 0:00:04.882 *********
2026-03-10 01:04:35.979664 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2026-03-10 01:04:35.979674 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-10 01:04:35.979685 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-10 01:04:35.979695 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2026-03-10 01:04:35.979706 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2026-03-10 01:04:35.979716 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2026-03-10 01:04:35.979727 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2026-03-10 01:04:35.979737 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2026-03-10 01:04:35.979748 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2026-03-10 01:04:35.979759 | orchestrator |
2026-03-10 01:04:35.979769 | orchestrator | TASK [Create share directory] **************************************************
2026-03-10 01:04:35.979780 | orchestrator | Tuesday 10 March 2026 01:04:01 +0000 (0:00:04.440) 0:00:09.322 *********
2026-03-10 01:04:35.979791 | orchestrator | changed: [testbed-manager -> localhost]
2026-03-10 01:04:35.979802 | orchestrator |
2026-03-10 01:04:35.979813 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2026-03-10 01:04:35.979824 | orchestrator | Tuesday 10 March 2026 01:04:02 +0000 (0:00:01.308) 0:00:10.631 *********
2026-03-10 01:04:35.979835 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2026-03-10 01:04:35.979846 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-10 01:04:35.979856 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-10 01:04:35.979867 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2026-03-10 01:04:35.979878 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2026-03-10 01:04:35.979888 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2026-03-10 01:04:35.979899 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2026-03-10 01:04:35.979909 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2026-03-10 01:04:35.979920 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2026-03-10 01:04:35.979930 | orchestrator |
2026-03-10 01:04:35.979941 | orchestrator | TASK [Check if target directories exist] ***************************************
2026-03-10 01:04:35.979951 | orchestrator | Tuesday 10 March 2026 01:04:18 +0000 (0:00:16.141) 0:00:26.772 *********
2026-03-10 01:04:35.979962 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph)
2026-03-10 01:04:35.979983 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume)
2026-03-10 01:04:35.979994 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-10 01:04:35.980005 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup)
2026-03-10 01:04:35.980027 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-10 01:04:35.980038 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova)
2026-03-10 01:04:35.980050 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance)
2026-03-10 01:04:35.980060 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi)
2026-03-10 01:04:35.980071 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila)
2026-03-10 01:04:35.980082 | orchestrator |
2026-03-10 01:04:35.980093 | orchestrator | TASK [Write ceph keys to the configuration directory] **************************
2026-03-10 01:04:35.980103 | orchestrator | Tuesday 10 March 2026 01:04:22 +0000 (0:00:03.422) 0:00:30.195 *********
2026-03-10 01:04:35.980115 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2026-03-10 01:04:35.980126 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-10 01:04:35.980142 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-10 01:04:35.980153 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2026-03-10 01:04:35.980164 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2026-03-10 01:04:35.980175 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2026-03-10 01:04:35.980185 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2026-03-10 01:04:35.980196 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2026-03-10 01:04:35.980206 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2026-03-10 01:04:35.980217 | orchestrator |
2026-03-10 01:04:35.980227 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:04:35.980244 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 01:04:35.980263 | orchestrator |
2026-03-10 01:04:35.980281 | orchestrator |
2026-03-10 01:04:35.980300 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:04:35.980318 | orchestrator | Tuesday 10 March 2026 01:04:29 +0000 (0:00:07.253) 0:00:37.448 *********
2026-03-10 01:04:35.980335 | orchestrator | ===============================================================================
2026-03-10 01:04:35.980353 | orchestrator | Write ceph keys to the share directory --------------------------------- 16.14s
2026-03-10 01:04:35.980371 | orchestrator | Write ceph keys to the configuration directory -------------------------- 7.25s
2026-03-10 01:04:35.980415 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.69s
2026-03-10 01:04:35.980434 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.44s
2026-03-10 01:04:35.980453 | orchestrator | Check if target directories exist --------------------------------------- 3.42s
2026-03-10 01:04:35.980471 | orchestrator | Create share directory -------------------------------------------------- 1.31s
2026-03-10 01:04:35.980489 | orchestrator |
2026-03-10 01:04:35.980503 | orchestrator |
2026-03-10 01:04:35.980514 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-10 01:04:35.980525 | orchestrator |
2026-03-10 01:04:35.980536 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-10 01:04:35.980547 | orchestrator | Tuesday 10 March 2026 01:02:38 +0000 (0:00:00.351) 0:00:00.351 *********
2026-03-10 01:04:35.980558 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:04:35.980577 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:04:35.980588 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:04:35.980598 | orchestrator |
2026-03-10 01:04:35.980609 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-10 01:04:35.980620 | orchestrator | Tuesday 10 March 2026 01:02:39 +0000 (0:00:00.409) 0:00:00.761 *********
2026-03-10 01:04:35.980630 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-10 01:04:35.980641 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-10 01:04:35.980652 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-10 01:04:35.980663 | orchestrator |
2026-03-10 01:04:35.980674 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-10 01:04:35.980684 | orchestrator |
2026-03-10 01:04:35.980695 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-10 01:04:35.980706 | orchestrator | Tuesday 10 March 2026 01:02:39 +0000 (0:00:00.380) 0:00:01.141 *********
2026-03-10 01:04:35.980716 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:04:35.980727 | orchestrator |
2026-03-10 01:04:35.980738 | orchestrator | TASK [horizon : Ensuring config directories exist] *****************************
2026-03-10 01:04:35.980749 | orchestrator | Tuesday 10 March 2026 01:02:40 +0000 (0:00:00.448) 0:00:01.590 ********* 2026-03-10
01:04:35.980790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-10 01:04:35.980808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance 
roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-10 01:04:35.980844 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-10 01:04:35.980858 | orchestrator | 2026-03-10 01:04:35.980869 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-10 01:04:35.980880 | orchestrator | Tuesday 10 March 2026 01:02:41 +0000 (0:00:01.047) 0:00:02.637 ********* 2026-03-10 01:04:35.980897 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:04:35.980908 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:04:35.980919 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:04:35.980930 | orchestrator | 2026-03-10 01:04:35.980940 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-10 01:04:35.980951 | orchestrator | Tuesday 10 March 2026 01:02:41 +0000 (0:00:00.528) 0:00:03.166 ********* 2026-03-10 01:04:35.980962 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-10 01:04:35.980973 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-10 01:04:35.980983 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-10 01:04:35.980994 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-10 01:04:35.981004 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-10 01:04:35.981015 | 
orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-10 01:04:35.981026 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-10 01:04:35.981036 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-10 01:04:35.981047 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-10 01:04:35.981058 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-10 01:04:35.981068 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-10 01:04:35.981079 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-10 01:04:35.981090 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-10 01:04:35.981101 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-10 01:04:35.981111 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-10 01:04:35.981122 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-10 01:04:35.981132 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-10 01:04:35.981143 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-10 01:04:35.981154 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-10 01:04:35.981165 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-10 01:04:35.981175 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-10 01:04:35.981186 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': 
False})  2026-03-10 01:04:35.981203 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-10 01:04:35.981214 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-10 01:04:35.981225 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-10 01:04:35.981238 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-10 01:04:35.981249 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-10 01:04:35.981260 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-10 01:04:35.981276 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-10 01:04:35.981302 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-10 01:04:35.981313 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-10 01:04:35.981324 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-10 01:04:35.981335 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 
'nova', 'enabled': True}) 2026-03-10 01:04:35.981345 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-10 01:04:35.981356 | orchestrator | 2026-03-10 01:04:35.981367 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-10 01:04:35.981377 | orchestrator | Tuesday 10 March 2026 01:02:42 +0000 (0:00:00.945) 0:00:04.111 ********* 2026-03-10 01:04:35.981433 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:04:35.981452 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:04:35.981470 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:04:35.981488 | orchestrator | 2026-03-10 01:04:35.981506 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-10 01:04:35.981523 | orchestrator | Tuesday 10 March 2026 01:02:43 +0000 (0:00:00.368) 0:00:04.479 ********* 2026-03-10 01:04:35.981540 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.981556 | orchestrator | 2026-03-10 01:04:35.981572 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-10 01:04:35.981591 | orchestrator | Tuesday 10 March 2026 01:02:43 +0000 (0:00:00.129) 0:00:04.608 ********* 2026-03-10 01:04:35.981609 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.981627 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:04:35.981645 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:04:35.981664 | orchestrator | 2026-03-10 01:04:35.981683 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-10 01:04:35.981701 | orchestrator | Tuesday 10 March 2026 01:02:43 +0000 (0:00:00.583) 0:00:05.192 ********* 2026-03-10 01:04:35.981720 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:04:35.981738 | orchestrator | ok: [testbed-node-1] 2026-03-10 
01:04:35.981750 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:04:35.981760 | orchestrator | 2026-03-10 01:04:35.981771 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-10 01:04:35.981782 | orchestrator | Tuesday 10 March 2026 01:02:44 +0000 (0:00:00.358) 0:00:05.551 ********* 2026-03-10 01:04:35.981792 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.981803 | orchestrator | 2026-03-10 01:04:35.981813 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-10 01:04:35.981824 | orchestrator | Tuesday 10 March 2026 01:02:44 +0000 (0:00:00.150) 0:00:05.701 ********* 2026-03-10 01:04:35.981834 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.981845 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:04:35.981856 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:04:35.981866 | orchestrator | 2026-03-10 01:04:35.981877 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-10 01:04:35.981887 | orchestrator | Tuesday 10 March 2026 01:02:44 +0000 (0:00:00.324) 0:00:06.026 ********* 2026-03-10 01:04:35.981898 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:04:35.981909 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:04:35.981919 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:04:35.981930 | orchestrator | 2026-03-10 01:04:35.981941 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-10 01:04:35.981961 | orchestrator | Tuesday 10 March 2026 01:02:44 +0000 (0:00:00.307) 0:00:06.334 ********* 2026-03-10 01:04:35.981972 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.981983 | orchestrator | 2026-03-10 01:04:35.981993 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-10 01:04:35.982004 | orchestrator | Tuesday 10 March 2026 01:02:45 +0000 
(0:00:00.369) 0:00:06.703 ********* 2026-03-10 01:04:35.982015 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.982083 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:04:35.982094 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:04:35.982105 | orchestrator | 2026-03-10 01:04:35.982116 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-10 01:04:35.982136 | orchestrator | Tuesday 10 March 2026 01:02:45 +0000 (0:00:00.338) 0:00:07.042 ********* 2026-03-10 01:04:35.982147 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:04:35.982158 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:04:35.982169 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:04:35.982180 | orchestrator | 2026-03-10 01:04:35.982190 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-10 01:04:35.982201 | orchestrator | Tuesday 10 March 2026 01:02:46 +0000 (0:00:00.358) 0:00:07.401 ********* 2026-03-10 01:04:35.982212 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.982223 | orchestrator | 2026-03-10 01:04:35.982233 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-10 01:04:35.982244 | orchestrator | Tuesday 10 March 2026 01:02:46 +0000 (0:00:00.157) 0:00:07.559 ********* 2026-03-10 01:04:35.982255 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.982265 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:04:35.982276 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:04:35.982286 | orchestrator | 2026-03-10 01:04:35.982297 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-10 01:04:35.982307 | orchestrator | Tuesday 10 March 2026 01:02:46 +0000 (0:00:00.310) 0:00:07.869 ********* 2026-03-10 01:04:35.982318 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:04:35.982336 | orchestrator | ok: 
[testbed-node-1] 2026-03-10 01:04:35.982347 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:04:35.982358 | orchestrator | 2026-03-10 01:04:35.982368 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-10 01:04:35.982379 | orchestrator | Tuesday 10 March 2026 01:02:47 +0000 (0:00:00.685) 0:00:08.555 ********* 2026-03-10 01:04:35.982451 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.982463 | orchestrator | 2026-03-10 01:04:35.982474 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-10 01:04:35.982484 | orchestrator | Tuesday 10 March 2026 01:02:47 +0000 (0:00:00.139) 0:00:08.694 ********* 2026-03-10 01:04:35.982495 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.982506 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:04:35.982516 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:04:35.982527 | orchestrator | 2026-03-10 01:04:35.982538 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-10 01:04:35.982548 | orchestrator | Tuesday 10 March 2026 01:02:47 +0000 (0:00:00.310) 0:00:09.005 ********* 2026-03-10 01:04:35.982559 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:04:35.982570 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:04:35.982580 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:04:35.982591 | orchestrator | 2026-03-10 01:04:35.982602 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-10 01:04:35.982613 | orchestrator | Tuesday 10 March 2026 01:02:47 +0000 (0:00:00.357) 0:00:09.363 ********* 2026-03-10 01:04:35.982623 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.982634 | orchestrator | 2026-03-10 01:04:35.982645 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-10 01:04:35.982655 | orchestrator | Tuesday 10 
March 2026 01:02:48 +0000 (0:00:00.155) 0:00:09.518 ********* 2026-03-10 01:04:35.982666 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.982685 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:04:35.982696 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:04:35.982706 | orchestrator | 2026-03-10 01:04:35.982717 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-10 01:04:35.982728 | orchestrator | Tuesday 10 March 2026 01:02:48 +0000 (0:00:00.327) 0:00:09.846 ********* 2026-03-10 01:04:35.982738 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:04:35.982749 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:04:35.982760 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:04:35.982771 | orchestrator | 2026-03-10 01:04:35.982781 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-10 01:04:35.982792 | orchestrator | Tuesday 10 March 2026 01:02:49 +0000 (0:00:00.603) 0:00:10.449 ********* 2026-03-10 01:04:35.982803 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.982814 | orchestrator | 2026-03-10 01:04:35.982824 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-10 01:04:35.982835 | orchestrator | Tuesday 10 March 2026 01:02:49 +0000 (0:00:00.147) 0:00:10.597 ********* 2026-03-10 01:04:35.982846 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.982855 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:04:35.982865 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:04:35.982874 | orchestrator | 2026-03-10 01:04:35.982884 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-10 01:04:35.982893 | orchestrator | Tuesday 10 March 2026 01:02:49 +0000 (0:00:00.307) 0:00:10.904 ********* 2026-03-10 01:04:35.982903 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:04:35.982912 | 
orchestrator | ok: [testbed-node-1] 2026-03-10 01:04:35.982922 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:04:35.982931 | orchestrator | 2026-03-10 01:04:35.982941 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-10 01:04:35.982950 | orchestrator | Tuesday 10 March 2026 01:02:49 +0000 (0:00:00.402) 0:00:11.306 ********* 2026-03-10 01:04:35.982960 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.982969 | orchestrator | 2026-03-10 01:04:35.982979 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-10 01:04:35.982988 | orchestrator | Tuesday 10 March 2026 01:02:50 +0000 (0:00:00.158) 0:00:11.465 ********* 2026-03-10 01:04:35.982998 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.983007 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:04:35.983016 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:04:35.983026 | orchestrator | 2026-03-10 01:04:35.983035 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-10 01:04:35.983045 | orchestrator | Tuesday 10 March 2026 01:02:50 +0000 (0:00:00.668) 0:00:12.134 ********* 2026-03-10 01:04:35.983054 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:04:35.983064 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:04:35.983073 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:04:35.983083 | orchestrator | 2026-03-10 01:04:35.983092 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-10 01:04:35.983102 | orchestrator | Tuesday 10 March 2026 01:02:51 +0000 (0:00:00.374) 0:00:12.508 ********* 2026-03-10 01:04:35.983111 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.983121 | orchestrator | 2026-03-10 01:04:35.983136 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-10 01:04:35.983146 | 
orchestrator | Tuesday 10 March 2026 01:02:51 +0000 (0:00:00.146) 0:00:12.655 ********* 2026-03-10 01:04:35.983156 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.983166 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:04:35.983175 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:04:35.983184 | orchestrator | 2026-03-10 01:04:35.983194 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-10 01:04:35.983204 | orchestrator | Tuesday 10 March 2026 01:02:51 +0000 (0:00:00.325) 0:00:12.980 ********* 2026-03-10 01:04:35.983213 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:04:35.983223 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:04:35.983239 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:04:35.983256 | orchestrator | 2026-03-10 01:04:35.983273 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-10 01:04:35.983287 | orchestrator | Tuesday 10 March 2026 01:02:52 +0000 (0:00:00.410) 0:00:13.391 ********* 2026-03-10 01:04:35.983302 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.983320 | orchestrator | 2026-03-10 01:04:35.983338 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-10 01:04:35.983362 | orchestrator | Tuesday 10 March 2026 01:02:52 +0000 (0:00:00.152) 0:00:13.543 ********* 2026-03-10 01:04:35.983372 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.983405 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:04:35.983418 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:04:35.983428 | orchestrator | 2026-03-10 01:04:35.983437 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-10 01:04:35.983446 | orchestrator | Tuesday 10 March 2026 01:02:52 +0000 (0:00:00.658) 0:00:14.202 ********* 2026-03-10 01:04:35.983456 | orchestrator | changed: [testbed-node-0] 
2026-03-10 01:04:35.983466 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:04:35.983475 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:04:35.983484 | orchestrator | 2026-03-10 01:04:35.983494 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-10 01:04:35.983503 | orchestrator | Tuesday 10 March 2026 01:02:54 +0000 (0:00:01.894) 0:00:16.096 ********* 2026-03-10 01:04:35.983513 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-10 01:04:35.983523 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-10 01:04:35.983532 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-10 01:04:35.983541 | orchestrator | 2026-03-10 01:04:35.983551 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-10 01:04:35.983560 | orchestrator | Tuesday 10 March 2026 01:02:56 +0000 (0:00:02.138) 0:00:18.235 ********* 2026-03-10 01:04:35.983570 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-10 01:04:35.983579 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-10 01:04:35.983589 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-10 01:04:35.983599 | orchestrator | 2026-03-10 01:04:35.983608 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-10 01:04:35.983617 | orchestrator | Tuesday 10 March 2026 01:02:59 +0000 (0:00:02.561) 0:00:20.797 ********* 2026-03-10 01:04:35.983627 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-10 01:04:35.983636 | orchestrator | changed: 
[testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-10 01:04:35.983646 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-10 01:04:35.983655 | orchestrator | 2026-03-10 01:04:35.983665 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-10 01:04:35.983674 | orchestrator | Tuesday 10 March 2026 01:03:01 +0000 (0:00:02.434) 0:00:23.231 ********* 2026-03-10 01:04:35.983684 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.983693 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:04:35.983707 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:04:35.983723 | orchestrator | 2026-03-10 01:04:35.983739 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-10 01:04:35.983755 | orchestrator | Tuesday 10 March 2026 01:03:02 +0000 (0:00:00.333) 0:00:23.565 ********* 2026-03-10 01:04:35.983771 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.983788 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:04:35.983804 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:04:35.983832 | orchestrator | 2026-03-10 01:04:35.983848 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-10 01:04:35.983865 | orchestrator | Tuesday 10 March 2026 01:03:02 +0000 (0:00:00.308) 0:00:23.874 ********* 2026-03-10 01:04:35.983876 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:04:35.983886 | orchestrator | 2026-03-10 01:04:35.983895 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2026-03-10 01:04:35.983904 | orchestrator | Tuesday 10 March 2026 01:03:03 +0000 (0:00:00.811) 0:00:24.685 ********* 2026-03-10 01:04:35.983935 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-10 01:04:35.983948 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-10 01:04:35.983981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-10 01:04:35.983992 | orchestrator | 2026-03-10 01:04:35.984003 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2026-03-10 01:04:35.984012 | orchestrator | Tuesday 10 March 2026 01:03:04 +0000 (0:00:01.674) 0:00:26.360 ********* 2026-03-10 01:04:35.984029 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-10 01:04:35.984052 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.984069 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-10 01:04:35.984080 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:04:35.984098 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 
'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-10 01:04:35.984115 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:04:35.984125 | orchestrator | 2026-03-10 01:04:35.984135 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2026-03-10 01:04:35.984144 | orchestrator | Tuesday 10 March 2026 01:03:05 +0000 (0:00:00.709) 0:00:27.070 ********* 2026-03-10 01:04:35.984159 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if 
{ path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-10 01:04:35.984176 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:04:35.984204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': 
['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-10 01:04:35.984215 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:04:35.984225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-10 01:04:35.984241 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:04:35.984251 | orchestrator | 2026-03-10 01:04:35.984260 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2026-03-10 01:04:35.984270 | orchestrator | Tuesday 10 March 2026 01:03:06 +0000 (0:00:00.906) 0:00:27.977 ********* 2026-03-10 01:04:35.984293 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-10 01:04:35.984305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-10 01:04:35.984336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-10 01:04:35.984347 | orchestrator | 2026-03-10 01:04:35.984357 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-10 01:04:35.984367 | orchestrator | Tuesday 10 March 2026 01:03:08 +0000 (0:00:01.579) 0:00:29.556 ********* 2026-03-10 01:04:35.984376 | orchestrator | skipping: [testbed-node-0] 2026-03-10 
01:04:35.984409 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:04:35.984420 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:04:35.984430 | orchestrator | 2026-03-10 01:04:35.984439 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-10 01:04:35.984449 | orchestrator | Tuesday 10 March 2026 01:03:08 +0000 (0:00:00.398) 0:00:29.955 ********* 2026-03-10 01:04:35.984465 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:04:35.984474 | orchestrator | 2026-03-10 01:04:35.984484 | orchestrator | TASK [horizon : Creating Horizon database] ************************************* 2026-03-10 01:04:35.984494 | orchestrator | Tuesday 10 March 2026 01:03:09 +0000 (0:00:00.592) 0:00:30.548 ********* 2026-03-10 01:04:35.984503 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:04:35.984513 | orchestrator | 2026-03-10 01:04:35.984522 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ******** 2026-03-10 01:04:35.984532 | orchestrator | Tuesday 10 March 2026 01:03:11 +0000 (0:00:02.723) 0:00:33.271 ********* 2026-03-10 01:04:35.984542 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:04:35.984551 | orchestrator | 2026-03-10 01:04:35.984561 | orchestrator | TASK [horizon : Running Horizon bootstrap container] *************************** 2026-03-10 01:04:35.984570 | orchestrator | Tuesday 10 March 2026 01:03:14 +0000 (0:00:02.935) 0:00:36.206 ********* 2026-03-10 01:04:35.984580 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:04:35.984589 | orchestrator | 2026-03-10 01:04:35.984599 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-10 01:04:35.984608 | orchestrator | Tuesday 10 March 2026 01:03:31 +0000 (0:00:17.146) 0:00:53.353 ********* 2026-03-10 01:04:35.984618 | orchestrator | 2026-03-10 01:04:35.984627 | 
orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-10 01:04:35.984637 | orchestrator | Tuesday 10 March 2026 01:03:32 +0000 (0:00:00.095) 0:00:53.449 ********* 2026-03-10 01:04:35.984646 | orchestrator | 2026-03-10 01:04:35.984656 | orchestrator | TASK [horizon : Flush handlers] ************************************************ 2026-03-10 01:04:35.984665 | orchestrator | Tuesday 10 March 2026 01:03:32 +0000 (0:00:00.075) 0:00:53.524 ********* 2026-03-10 01:04:35.984675 | orchestrator | 2026-03-10 01:04:35.984684 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] ************************** 2026-03-10 01:04:35.984694 | orchestrator | Tuesday 10 March 2026 01:03:32 +0000 (0:00:00.071) 0:00:53.595 ********* 2026-03-10 01:04:35.984703 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:04:35.984713 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:04:35.984722 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:04:35.984732 | orchestrator | 2026-03-10 01:04:35.984742 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:04:35.984751 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0 2026-03-10 01:04:35.984761 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-10 01:04:35.984771 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0 2026-03-10 01:04:35.984781 | orchestrator | 2026-03-10 01:04:35.984791 | orchestrator | 2026-03-10 01:04:35.984806 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:04:35.984816 | orchestrator | Tuesday 10 March 2026 01:04:32 +0000 (0:01:00.415) 0:01:54.012 ********* 2026-03-10 01:04:35.984825 | orchestrator | 
=============================================================================== 2026-03-10 01:04:35.984841 | orchestrator | horizon : Restart horizon container ------------------------------------ 60.42s 2026-03-10 01:04:35.984857 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 17.15s 2026-03-10 01:04:35.984873 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.94s 2026-03-10 01:04:35.984888 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.72s 2026-03-10 01:04:35.984905 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.56s 2026-03-10 01:04:35.984920 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.43s 2026-03-10 01:04:35.984944 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.14s 2026-03-10 01:04:35.984966 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.89s 2026-03-10 01:04:35.984981 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.67s 2026-03-10 01:04:35.984995 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.58s 2026-03-10 01:04:35.985009 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.05s 2026-03-10 01:04:35.985023 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.95s 2026-03-10 01:04:35.985038 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.91s 2026-03-10 01:04:35.985054 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.81s 2026-03-10 01:04:35.985069 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.71s 2026-03-10 01:04:35.985084 | orchestrator | horizon : 
Update policy file name --------------------------------------- 0.69s 2026-03-10 01:04:35.985099 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.67s 2026-03-10 01:04:35.985115 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.66s 2026-03-10 01:04:35.985131 | orchestrator | horizon : Update policy file name --------------------------------------- 0.60s 2026-03-10 01:04:35.985147 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.59s 2026-03-10 01:04:39.021853 | orchestrator | 2026-03-10 01:04:39 | INFO  | Task cefea592-6174-45ed-b623-b36383f362a2 is in state STARTED 2026-03-10 01:04:39.023459 | orchestrator | 2026-03-10 01:04:39 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:04:39.023535 | orchestrator | 2026-03-10 01:04:39 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:04:42.071374 | orchestrator | 2026-03-10 01:04:42 | INFO  | Task cefea592-6174-45ed-b623-b36383f362a2 is in state STARTED 2026-03-10 01:04:42.072765 | orchestrator | 2026-03-10 01:04:42 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:04:42.072813 | orchestrator | 2026-03-10 01:04:42 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:04:45.120594 | orchestrator | 2026-03-10 01:04:45 | INFO  | Task cefea592-6174-45ed-b623-b36383f362a2 is in state STARTED 2026-03-10 01:04:45.123806 | orchestrator | 2026-03-10 01:04:45 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:04:45.123883 | orchestrator | 2026-03-10 01:04:45 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:04:48.176410 | orchestrator | 2026-03-10 01:04:48 | INFO  | Task cefea592-6174-45ed-b623-b36383f362a2 is in state STARTED 2026-03-10 01:04:48.178592 | orchestrator | 2026-03-10 01:04:48 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 
2026-03-10 01:05:21 | INFO  | Task cefea592-6174-45ed-b623-b36383f362a2 is in state STARTED 2026-03-10 01:05:21.731152 | orchestrator | 2026-03-10 01:05:21 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:05:21.731229 | orchestrator | 2026-03-10 01:05:21 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:05:24.782940 | orchestrator | 2026-03-10 01:05:24 | INFO  | Task cefea592-6174-45ed-b623-b36383f362a2 is in state STARTED 2026-03-10 01:05:24.784606 | orchestrator | 2026-03-10 01:05:24 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:05:24.784716 | orchestrator | 2026-03-10 01:05:24 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:05:27.831589 | orchestrator | 2026-03-10 01:05:27 | INFO  | Task cefea592-6174-45ed-b623-b36383f362a2 is in state STARTED 2026-03-10 01:05:27.834223 | orchestrator | 2026-03-10 01:05:27 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:05:27.834305 | orchestrator | 2026-03-10 01:05:27 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:05:30.883749 | orchestrator | 2026-03-10 01:05:30 | INFO  | Task cefea592-6174-45ed-b623-b36383f362a2 is in state STARTED 2026-03-10 01:05:30.885930 | orchestrator | 2026-03-10 01:05:30 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:05:30.886000 | orchestrator | 2026-03-10 01:05:30 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:05:33.933960 | orchestrator | 2026-03-10 01:05:33 | INFO  | Task cefea592-6174-45ed-b623-b36383f362a2 is in state SUCCESS 2026-03-10 01:05:33.934105 | orchestrator | 2026-03-10 01:05:33 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:05:33.934122 | orchestrator | 2026-03-10 01:05:33 | INFO  | Task 89cb964f-d603-48c9-94cb-b17f62caaafb is in state STARTED 2026-03-10 01:05:33.934130 | orchestrator | 2026-03-10 01:05:33 | INFO  | Task 
5826f2df-94f6-4605-96fe-424a49b4060f is in state STARTED 2026-03-10 01:05:33.934137 | orchestrator | 2026-03-10 01:05:33 | INFO  | Task 4257c6cf-32a0-490b-9803-c64ea795ab77 is in state STARTED 2026-03-10 01:05:33.934145 | orchestrator | 2026-03-10 01:05:33 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:05:36.978707 | orchestrator | 2026-03-10 01:05:36 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:05:36.980762 | orchestrator | 2026-03-10 01:05:36 | INFO  | Task 89cb964f-d603-48c9-94cb-b17f62caaafb is in state STARTED 2026-03-10 01:05:36.985167 | orchestrator | 2026-03-10 01:05:36 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state STARTED 2026-03-10 01:05:36.987075 | orchestrator | 2026-03-10 01:05:36 | INFO  | Task 5826f2df-94f6-4605-96fe-424a49b4060f is in state SUCCESS 2026-03-10 01:05:36.988593 | orchestrator | 2026-03-10 01:05:36.988679 | orchestrator | 2026-03-10 01:05:36.988686 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-10 01:05:36.988693 | orchestrator | 2026-03-10 01:05:36.988700 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-10 01:05:36.988708 | orchestrator | Tuesday 10 March 2026 01:04:34 +0000 (0:00:00.269) 0:00:00.269 ********* 2026-03-10 01:05:36.988716 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-10 01:05:36.988738 | orchestrator | 2026-03-10 01:05:36.988745 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-10 01:05:36.988751 | orchestrator | Tuesday 10 March 2026 01:04:34 +0000 (0:00:00.253) 0:00:00.522 ********* 2026-03-10 01:05:36.988758 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-10 01:05:36.988766 | orchestrator | changed: [testbed-manager] => 
(item=/opt/cephclient/data) 2026-03-10 01:05:36.988773 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-10 01:05:36.988780 | orchestrator | 2026-03-10 01:05:36.988785 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ******************** 2026-03-10 01:05:36.988789 | orchestrator | Tuesday 10 March 2026 01:04:35 +0000 (0:00:01.356) 0:00:01.879 ********* 2026-03-10 01:05:36.988852 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-10 01:05:36.988856 | orchestrator | 2026-03-10 01:05:36.988860 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-10 01:05:36.988883 | orchestrator | Tuesday 10 March 2026 01:04:37 +0000 (0:00:01.600) 0:00:03.479 ********* 2026-03-10 01:05:36.988887 | orchestrator | changed: [testbed-manager] 2026-03-10 01:05:36.988891 | orchestrator | 2026-03-10 01:05:36.988895 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-10 01:05:36.988899 | orchestrator | Tuesday 10 March 2026 01:04:38 +0000 (0:00:00.987) 0:00:04.466 ********* 2026-03-10 01:05:36.988903 | orchestrator | changed: [testbed-manager] 2026-03-10 01:05:36.988906 | orchestrator | 2026-03-10 01:05:36.988910 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-10 01:05:36.988914 | orchestrator | Tuesday 10 March 2026 01:04:39 +0000 (0:00:01.025) 0:00:05.492 ********* 2026-03-10 01:05:36.988918 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
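The "Task … is in state STARTED / Wait 1 second(s) until the next check" entries above, and the `FAILED - RETRYING … (10 retries left)` line, both follow the same poll-until-done pattern. A minimal sketch of that loop (this is an illustration, not the actual osism client code; `wait_for_task` and the injectable `sleep` parameter are hypothetical names):

```python
import itertools
import time


def wait_for_task(get_state, interval=1, sleep=time.sleep):
    """Poll a task's state until it leaves STARTED, as in the log above.

    get_state: callable returning the current state string
               (e.g. "STARTED", "SUCCESS", "FAILURE").
    interval:  seconds to wait between checks (the log waits 1 second).
    sleep:     sleep function, injectable so the loop can be tested
               without real delays.
    """
    while True:
        state = get_state()
        if state != "STARTED":
            return state
        # Mirrors the "Wait 1 second(s) until the next check" entries.
        sleep(interval)


# Usage: a fake task that reports STARTED twice, then SUCCESS.
states = itertools.chain(["STARTED", "STARTED"], itertools.repeat("SUCCESS"))
result = wait_for_task(lambda: next(states), sleep=lambda s: None)
# result == "SUCCESS"
```

The Ansible-side equivalent is a task with `retries`/`until`, which is what produces the `FAILED - RETRYING` lines when the condition is not yet met.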
2026-03-10 01:05:36.988923 | orchestrator | ok: [testbed-manager] 2026-03-10 01:05:36.988927 | orchestrator | 2026-03-10 01:05:36.988931 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-10 01:05:36.988934 | orchestrator | Tuesday 10 March 2026 01:05:21 +0000 (0:00:41.650) 0:00:47.143 ********* 2026-03-10 01:05:36.988938 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-10 01:05:36.988964 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-10 01:05:36.988970 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-10 01:05:36.988974 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-10 01:05:36.988977 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-10 01:05:36.988981 | orchestrator | 2026-03-10 01:05:36.988985 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-10 01:05:36.988989 | orchestrator | Tuesday 10 March 2026 01:05:25 +0000 (0:00:04.365) 0:00:51.508 ********* 2026-03-10 01:05:36.988993 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-10 01:05:36.988996 | orchestrator | 2026-03-10 01:05:36.989000 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-10 01:05:36.989005 | orchestrator | Tuesday 10 March 2026 01:05:25 +0000 (0:00:00.477) 0:00:51.986 ********* 2026-03-10 01:05:36.989008 | orchestrator | skipping: [testbed-manager] 2026-03-10 01:05:36.989036 | orchestrator | 2026-03-10 01:05:36.989040 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-10 01:05:36.989044 | orchestrator | Tuesday 10 March 2026 01:05:26 +0000 (0:00:00.158) 0:00:52.144 ********* 2026-03-10 01:05:36.989047 | orchestrator | skipping: [testbed-manager] 2026-03-10 01:05:36.989051 | orchestrator | 2026-03-10 01:05:36.989055 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-03-10 01:05:36.989059 | orchestrator | Tuesday 10 March 2026 01:05:26 +0000 (0:00:00.529) 0:00:52.674 ********* 2026-03-10 01:05:36.989062 | orchestrator | changed: [testbed-manager] 2026-03-10 01:05:36.989268 | orchestrator | 2026-03-10 01:05:36.989281 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-10 01:05:36.989287 | orchestrator | Tuesday 10 March 2026 01:05:28 +0000 (0:00:01.552) 0:00:54.228 ********* 2026-03-10 01:05:36.989294 | orchestrator | changed: [testbed-manager] 2026-03-10 01:05:36.989299 | orchestrator | 2026-03-10 01:05:36.989305 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-03-10 01:05:36.989311 | orchestrator | Tuesday 10 March 2026 01:05:29 +0000 (0:00:00.788) 0:00:55.017 ********* 2026-03-10 01:05:36.989317 | orchestrator | changed: [testbed-manager] 2026-03-10 01:05:36.989323 | orchestrator | 2026-03-10 01:05:36.989329 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-10 01:05:36.989336 | orchestrator | Tuesday 10 March 2026 01:05:29 +0000 (0:00:00.596) 0:00:55.613 ********* 2026-03-10 01:05:36.989404 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-10 01:05:36.989411 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-10 01:05:36.989415 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-10 01:05:36.989419 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-10 01:05:36.989434 | orchestrator | 2026-03-10 01:05:36.989440 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:05:36.989490 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-10 01:05:36.989500 | orchestrator | 2026-03-10 01:05:36.989508 | orchestrator | 2026-03-10 
01:05:36.989542 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:05:36.989547 | orchestrator | Tuesday 10 March 2026 01:05:31 +0000 (0:00:01.653) 0:00:57.267 ********* 2026-03-10 01:05:36.989565 | orchestrator | =============================================================================== 2026-03-10 01:05:36.989569 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 41.65s 2026-03-10 01:05:36.989573 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.37s 2026-03-10 01:05:36.989576 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.65s 2026-03-10 01:05:36.989580 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.60s 2026-03-10 01:05:36.989584 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.55s 2026-03-10 01:05:36.989588 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.36s 2026-03-10 01:05:36.989591 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.03s 2026-03-10 01:05:36.989595 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.99s 2026-03-10 01:05:36.989599 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.79s 2026-03-10 01:05:36.989602 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.60s 2026-03-10 01:05:36.989606 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.53s 2026-03-10 01:05:36.989610 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.48s 2026-03-10 01:05:36.989614 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.25s 2026-03-10 01:05:36.989618 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.16s 2026-03-10 01:05:36.989621 | orchestrator | 2026-03-10 01:05:36.989625 | orchestrator | 2026-03-10 01:05:36.989629 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:05:36.989632 | orchestrator | 2026-03-10 01:05:36.989636 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:05:36.989640 | orchestrator | Tuesday 10 March 2026 01:02:39 +0000 (0:00:00.258) 0:00:00.258 ********* 2026-03-10 01:05:36.989644 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:05:36.989648 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:05:36.989653 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:05:36.989659 | orchestrator | 2026-03-10 01:05:36.989668 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:05:36.989675 | orchestrator | Tuesday 10 March 2026 01:02:39 +0000 (0:00:00.283) 0:00:00.541 ********* 2026-03-10 01:05:36.989681 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-10 01:05:36.989686 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-10 01:05:36.989692 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-10 01:05:36.989698 | orchestrator | 2026-03-10 01:05:36.989704 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-10 01:05:36.989710 | orchestrator | 2026-03-10 01:05:36.989716 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-10 01:05:36.989721 | orchestrator | Tuesday 10 March 2026 01:02:40 +0000 (0:00:00.378) 0:00:00.920 ********* 2026-03-10 01:05:36.989727 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:05:36.989734 | 
orchestrator | 2026-03-10 01:05:36.989740 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-10 01:05:36.989756 | orchestrator | Tuesday 10 March 2026 01:02:40 +0000 (0:00:00.577) 0:00:01.497 ********* 2026-03-10 01:05:36.989766 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:05:36.989811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:05:36.989842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:05:36.989850 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:05:36.989858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:05:36.989871 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:05:36.989878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:05:36.989897 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:05:36.989904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:05:36.989911 | orchestrator | 2026-03-10 01:05:36.989917 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-03-10 01:05:36.989922 | orchestrator | Tuesday 10 March 2026 01:02:42 +0000 (0:00:01.835) 0:00:03.333 ********* 2026-03-10 01:05:36.989928 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:05:36.989936 | orchestrator | 2026-03-10 01:05:36.989942 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-10 01:05:36.989948 | orchestrator | Tuesday 10 March 2026 01:02:42 +0000 (0:00:00.154) 0:00:03.487 ********* 2026-03-10 01:05:36.989955 | orchestrator | skipping: [testbed-node-0] 2026-03-10 
01:05:36.989961 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:05:36.989967 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:05:36.989974 | orchestrator | 2026-03-10 01:05:36.989980 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-10 01:05:36.989985 | orchestrator | Tuesday 10 March 2026 01:02:43 +0000 (0:00:00.554) 0:00:04.042 ********* 2026-03-10 01:05:36.989989 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 01:05:36.989993 | orchestrator | 2026-03-10 01:05:36.990002 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-10 01:05:36.990006 | orchestrator | Tuesday 10 March 2026 01:02:44 +0000 (0:00:01.045) 0:00:05.087 ********* 2026-03-10 01:05:36.990010 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:05:36.990047 | orchestrator | 2026-03-10 01:05:36.990051 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-10 01:05:36.990055 | orchestrator | Tuesday 10 March 2026 01:02:44 +0000 (0:00:00.565) 0:00:05.652 ********* 2026-03-10 01:05:36.990060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:05:36.990082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:05:36.990088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': 
{'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:05:36.990092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:05:36.990101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:05:36.990105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:05:36.990109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:05:36.990119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:05:36.990123 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:05:36.990128 | orchestrator | 2026-03-10 01:05:36.990134 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-10 01:05:36.990142 | orchestrator | Tuesday 10 March 2026 01:02:48 +0000 (0:00:03.711) 0:00:09.364 ********* 2026-03-10 01:05:36.990153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 01:05:36.990165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:05:36.990172 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 01:05:36.990179 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:05:36.990200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-03-10 01:05:36.990209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:05:36.990215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 01:05:36.990227 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:05:36.990242 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 01:05:36.990247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:05:36.990252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 01:05:36.990256 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:05:36.990261 | orchestrator | 2026-03-10 01:05:36.990265 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-10 01:05:36.990270 | orchestrator | Tuesday 10 
March 2026 01:02:49 +0000 (0:00:00.622) 0:00:09.987 ********* 2026-03-10 01:05:36.990288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 01:05:36.990293 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:05:36.990302 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 01:05:36.990307 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:05:36.990312 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 01:05:36.990317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:05:36.990327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 01:05:36.990331 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:05:36.990337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  
2026-03-10 01:05:36.990346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:05:36.990351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 01:05:36.990356 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:05:36.990360 | orchestrator | 2026-03-10 01:05:36.990364 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-10 01:05:36.990439 | orchestrator | Tuesday 10 March 2026 01:02:50 +0000 (0:00:00.849) 0:00:10.837 ********* 2026-03-10 01:05:36.990455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:05:36.990473 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:05:36.990487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:05:36.990494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:05:36.990502 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:05:36.990509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:05:36.990516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:05:36.990533 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:05:36.990545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:05:36.990551 | orchestrator | 2026-03-10 01:05:36.990557 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-10 01:05:36.990563 | orchestrator | Tuesday 10 March 2026 01:02:53 +0000 (0:00:03.563) 0:00:14.401 ********* 2026-03-10 01:05:36.990570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:05:36.990576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:05:36.990583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:05:36.990598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:05:36.990611 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:05:36.990618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:05:36.990624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:05:36.990630 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:05:36.990637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:05:36.990662 | orchestrator | 2026-03-10 01:05:36.990669 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-10 01:05:36.990675 | orchestrator | Tuesday 10 March 2026 01:03:00 +0000 (0:00:06.458) 0:00:20.859 ********* 2026-03-10 01:05:36.990682 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:05:36.990688 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:05:36.990695 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:05:36.990701 | orchestrator | 2026-03-10 01:05:36.990715 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-10 01:05:36.990722 | orchestrator | Tuesday 10 March 2026 01:03:01 +0000 (0:00:01.694) 0:00:22.554 ********* 2026-03-10 01:05:36.990728 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:05:36.990734 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:05:36.990741 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:05:36.990757 | orchestrator | 2026-03-10 01:05:36.990763 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-10 01:05:36.990769 | orchestrator | Tuesday 10 March 2026 01:03:02 +0000 (0:00:00.620) 0:00:23.175 ********* 2026-03-10 01:05:36.990775 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:05:36.990782 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:05:36.990788 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:05:36.990794 | orchestrator | 2026-03-10 01:05:36.990800 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-10 01:05:36.990805 | orchestrator | Tuesday 10 March 2026 01:03:02 +0000 (0:00:00.308) 0:00:23.484 ********* 2026-03-10 01:05:36.990812 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:05:36.990818 | orchestrator | skipping: [testbed-node-1] 
2026-03-10 01:05:36.990824 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:05:36.990829 | orchestrator | 2026-03-10 01:05:36.990835 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-10 01:05:36.990841 | orchestrator | Tuesday 10 March 2026 01:03:03 +0000 (0:00:00.576) 0:00:24.060 ********* 2026-03-10 01:05:36.990847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 01:05:36.990854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:05:36.990861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 01:05:36.990873 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:05:36.990890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 01:05:36.990897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:05:36.990904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 01:05:36.990910 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:05:36.990916 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance 
roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-10 01:05:36.990924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-10 01:05:36.990935 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-10 01:05:36.990941 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:05:36.990947 | orchestrator | 2026-03-10 01:05:36.990953 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-10 01:05:36.990969 | orchestrator | Tuesday 10 March 2026 01:03:04 +0000 (0:00:00.729) 0:00:24.790 ********* 2026-03-10 01:05:36.990975 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:05:36.990981 | orchestrator | 
skipping: [testbed-node-1] 2026-03-10 01:05:36.990987 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:05:36.990993 | orchestrator | 2026-03-10 01:05:36.990999 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-10 01:05:36.991011 | orchestrator | Tuesday 10 March 2026 01:03:04 +0000 (0:00:00.334) 0:00:25.124 ********* 2026-03-10 01:05:36.991017 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-10 01:05:36.991024 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-10 01:05:36.991030 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-10 01:05:36.991036 | orchestrator | 2026-03-10 01:05:36.991042 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-10 01:05:36.991049 | orchestrator | Tuesday 10 March 2026 01:03:05 +0000 (0:00:01.632) 0:00:26.757 ********* 2026-03-10 01:05:36.991055 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 01:05:36.991061 | orchestrator | 2026-03-10 01:05:36.991067 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-10 01:05:36.991073 | orchestrator | Tuesday 10 March 2026 01:03:07 +0000 (0:00:01.277) 0:00:28.034 ********* 2026-03-10 01:05:36.991079 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:05:36.991085 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:05:36.991091 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:05:36.991096 | orchestrator | 2026-03-10 01:05:36.991102 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-10 01:05:36.991109 | orchestrator | Tuesday 10 March 2026 01:03:08 +0000 (0:00:01.045) 0:00:29.079 ********* 2026-03-10 01:05:36.991114 | orchestrator | ok: 
[testbed-node-1 -> localhost] 2026-03-10 01:05:36.991120 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-10 01:05:36.991126 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 01:05:36.991132 | orchestrator | 2026-03-10 01:05:36.991138 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-10 01:05:36.991144 | orchestrator | Tuesday 10 March 2026 01:03:09 +0000 (0:00:01.617) 0:00:30.697 ********* 2026-03-10 01:05:36.991151 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:05:36.991157 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:05:36.991163 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:05:36.991169 | orchestrator | 2026-03-10 01:05:36.991175 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-10 01:05:36.991181 | orchestrator | Tuesday 10 March 2026 01:03:10 +0000 (0:00:00.391) 0:00:31.088 ********* 2026-03-10 01:05:36.991187 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-10 01:05:36.991197 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-10 01:05:36.991217 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-10 01:05:36.991223 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-10 01:05:36.991229 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-10 01:05:36.991236 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-10 01:05:36.991242 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-10 01:05:36.991248 | orchestrator | changed: [testbed-node-1] => (item={'src': 
'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-10 01:05:36.991254 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-10 01:05:36.991261 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-10 01:05:36.991267 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-10 01:05:36.991274 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-10 01:05:36.991281 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-10 01:05:36.991287 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-10 01:05:36.991293 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-10 01:05:36.991300 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-10 01:05:36.991306 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-10 01:05:36.991313 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-10 01:05:36.991319 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-10 01:05:36.991325 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-10 01:05:36.991344 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-10 01:05:36.991350 | orchestrator | 2026-03-10 01:05:36.991354 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-10 01:05:36.991358 | orchestrator | 
Tuesday 10 March 2026 01:03:19 +0000 (0:00:09.429) 0:00:40.518 ********* 2026-03-10 01:05:36.991362 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-10 01:05:36.991412 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-10 01:05:36.991431 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-10 01:05:36.991440 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-10 01:05:36.991447 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-10 01:05:36.991453 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-10 01:05:36.991459 | orchestrator | 2026-03-10 01:05:36.991465 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-10 01:05:36.991471 | orchestrator | Tuesday 10 March 2026 01:03:22 +0000 (0:00:02.829) 0:00:43.347 ********* 2026-03-10 01:05:36.991478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:05:36.991492 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:05:36.991500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-10 01:05:36.991515 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:05:36.991523 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:05:36.991536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-10 01:05:36.991543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:05:36.991551 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:05:36.991556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-10 01:05:36.991559 | orchestrator | 2026-03-10 01:05:36.991563 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-10 01:05:36.991568 | orchestrator | Tuesday 10 March 2026 01:03:24 +0000 (0:00:02.240) 0:00:45.587 ********* 2026-03-10 01:05:36.991571 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:05:36.991576 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:05:36.991580 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:05:36.991583 | orchestrator | 2026-03-10 01:05:36.991587 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-10 01:05:36.991591 | orchestrator | Tuesday 10 March 2026 01:03:25 +0000 (0:00:00.288) 0:00:45.876 ********* 2026-03-10 01:05:36.991595 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:05:36.991598 | orchestrator | 2026-03-10 01:05:36.991602 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-10 01:05:36.991606 | orchestrator | Tuesday 10 March 2026 01:03:27 +0000 (0:00:02.364) 0:00:48.240 ********* 2026-03-10 01:05:36.991610 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:05:36.991614 | orchestrator | 2026-03-10 01:05:36.991617 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-10 01:05:36.991621 | orchestrator | Tuesday 10 March 2026 01:03:29 +0000 (0:00:02.278) 0:00:50.519 ********* 2026-03-10 01:05:36.991625 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:05:36.991634 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:05:36.991638 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:05:36.991642 | orchestrator | 2026-03-10 01:05:36.991645 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is 
running] *****************
2026-03-10 01:05:36.991655 | orchestrator | Tuesday 10 March 2026 01:03:30 +0000 (0:00:01.110) 0:00:51.630 *********
2026-03-10 01:05:36.991659 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:05:36.991663 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:05:36.991667 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:05:36.991671 | orchestrator |
2026-03-10 01:05:36.991675 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] ***
2026-03-10 01:05:36.991679 | orchestrator | Tuesday 10 March 2026 01:03:31 +0000 (0:00:00.345) 0:00:51.975 *********
2026-03-10 01:05:36.991683 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:05:36.991687 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:05:36.991693 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:05:36.991701 | orchestrator |
2026-03-10 01:05:36.991711 | orchestrator | TASK [keystone : Running Keystone bootstrap container] *************************
2026-03-10 01:05:36.991718 | orchestrator | Tuesday 10 March 2026 01:03:31 +0000 (0:00:00.340) 0:00:52.315 *********
2026-03-10 01:05:36.991724 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:05:36.991730 | orchestrator |
2026-03-10 01:05:36.991737 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ******************
2026-03-10 01:05:36.991743 | orchestrator | Tuesday 10 March 2026 01:03:47 +0000 (0:00:15.725) 0:01:08.040 *********
2026-03-10 01:05:36.991749 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:05:36.991757 | orchestrator |
2026-03-10 01:05:36.991761 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-10 01:05:36.991765 | orchestrator | Tuesday 10 March 2026 01:03:59 +0000 (0:00:11.965) 0:01:20.006 *********
2026-03-10 01:05:36.991769 | orchestrator |
2026-03-10 01:05:36.991772 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-10 01:05:36.991776 | orchestrator | Tuesday 10 March 2026 01:03:59 +0000 (0:00:00.069) 0:01:20.076 *********
2026-03-10 01:05:36.991780 | orchestrator |
2026-03-10 01:05:36.991784 | orchestrator | TASK [keystone : Flush handlers] ***********************************************
2026-03-10 01:05:36.991788 | orchestrator | Tuesday 10 March 2026 01:03:59 +0000 (0:00:00.066) 0:01:20.143 *********
2026-03-10 01:05:36.991791 | orchestrator |
2026-03-10 01:05:36.991795 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ********************
2026-03-10 01:05:36.991799 | orchestrator | Tuesday 10 March 2026 01:03:59 +0000 (0:00:00.071) 0:01:20.214 *********
2026-03-10 01:05:36.991803 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:05:36.991806 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:05:36.991810 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:05:36.991814 | orchestrator |
2026-03-10 01:05:36.991818 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] *****************
2026-03-10 01:05:36.991822 | orchestrator | Tuesday 10 March 2026 01:04:22 +0000 (0:00:23.486) 0:01:43.700 *********
2026-03-10 01:05:36.991826 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:05:36.991830 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:05:36.991834 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:05:36.991838 | orchestrator |
2026-03-10 01:05:36.991842 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************
2026-03-10 01:05:36.991845 | orchestrator | Tuesday 10 March 2026 01:04:28 +0000 (0:00:05.355) 0:01:49.056 *********
2026-03-10 01:05:36.991849 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:05:36.991853 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:05:36.991857 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:05:36.991862 | orchestrator |
2026-03-10 01:05:36.991868 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-10 01:05:36.991874 | orchestrator | Tuesday 10 March 2026 01:04:39 +0000 (0:00:11.563) 0:02:00.619 *********
2026-03-10 01:05:36.991880 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:05:36.991892 | orchestrator |
2026-03-10 01:05:36.991898 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2026-03-10 01:05:36.991904 | orchestrator | Tuesday 10 March 2026 01:04:40 +0000 (0:00:00.839) 0:02:01.458 *********
2026-03-10 01:05:36.991909 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:05:36.991915 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:05:36.991921 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:05:36.991926 | orchestrator |
2026-03-10 01:05:36.991932 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2026-03-10 01:05:36.991938 | orchestrator | Tuesday 10 March 2026 01:04:41 +0000 (0:00:00.879) 0:02:02.338 *********
2026-03-10 01:05:36.991945 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:05:36.991950 | orchestrator |
2026-03-10 01:05:36.991956 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2026-03-10 01:05:36.991962 | orchestrator | Tuesday 10 March 2026 01:04:43 +0000 (0:00:01.846) 0:02:04.185 *********
2026-03-10 01:05:36.991967 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2026-03-10 01:05:36.991973 | orchestrator |
2026-03-10 01:05:36.991980 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2026-03-10 01:05:36.991987 | orchestrator | Tuesday 10 March 2026 01:04:55 +0000 (0:00:12.317) 0:02:16.502 *********
2026-03-10 01:05:36.991994 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2026-03-10 01:05:36.992001 | orchestrator |
2026-03-10 01:05:36.992006 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2026-03-10 01:05:36.992013 | orchestrator | Tuesday 10 March 2026 01:05:21 +0000 (0:00:25.602) 0:02:42.105 *********
2026-03-10 01:05:36.992019 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2026-03-10 01:05:36.992026 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2026-03-10 01:05:36.992032 | orchestrator |
2026-03-10 01:05:36.992038 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2026-03-10 01:05:36.992044 | orchestrator | Tuesday 10 March 2026 01:05:28 +0000 (0:00:07.023) 0:02:49.128 *********
2026-03-10 01:05:36.992050 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:05:36.992056 | orchestrator |
2026-03-10 01:05:36.992062 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2026-03-10 01:05:36.992068 | orchestrator | Tuesday 10 March 2026 01:05:28 +0000 (0:00:00.168) 0:02:49.296 *********
2026-03-10 01:05:36.992086 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:05:36.992093 | orchestrator |
2026-03-10 01:05:36.992098 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2026-03-10 01:05:36.992104 | orchestrator | Tuesday 10 March 2026 01:05:28 +0000 (0:00:00.124) 0:02:49.421 *********
2026-03-10 01:05:36.992110 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:05:36.992116 | orchestrator |
2026-03-10 01:05:36.992123 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2026-03-10 01:05:36.992128 | orchestrator | Tuesday 10 March 2026 01:05:28 +0000 (0:00:00.188) 0:02:49.610 *********
2026-03-10 01:05:36.992134 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:05:36.992140 | orchestrator |
2026-03-10 01:05:36.992146 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2026-03-10 01:05:36.992152 | orchestrator | Tuesday 10 March 2026 01:05:29 +0000 (0:00:00.805) 0:02:50.415 *********
2026-03-10 01:05:36.992158 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:05:36.992164 | orchestrator |
2026-03-10 01:05:36.992170 | orchestrator | TASK [keystone : include_tasks] ************************************************
2026-03-10 01:05:36.992176 | orchestrator | Tuesday 10 March 2026 01:05:32 +0000 (0:00:03.239) 0:02:53.655 *********
2026-03-10 01:05:36.992182 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:05:36.992188 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:05:36.992195 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:05:36.992228 | orchestrator |
2026-03-10 01:05:36.992235 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:05:36.992249 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
2026-03-10 01:05:36.992257 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-10 01:05:36.992264 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-10 01:05:36.992270 | orchestrator |
2026-03-10 01:05:36.992276 | orchestrator |
2026-03-10 01:05:36.992282 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:05:36.992289 | orchestrator | Tuesday 10 March 2026 01:05:33 +0000 (0:00:00.585) 0:02:54.241 *********
2026-03-10 01:05:36.992295 | orchestrator | ===============================================================================
2026-03-10 01:05:36.992301 | orchestrator | service-ks-register : keystone | Creating services --------------------- 25.60s
2026-03-10 01:05:36.992308 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 23.49s
2026-03-10 01:05:36.992314 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 15.73s
2026-03-10 01:05:36.992320 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 12.32s
2026-03-10 01:05:36.992324 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 11.97s
2026-03-10 01:05:36.992328 | orchestrator | keystone : Restart keystone container ---------------------------------- 11.56s
2026-03-10 01:05:36.992332 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.43s
2026-03-10 01:05:36.992335 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.02s
2026-03-10 01:05:36.992339 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 6.46s
2026-03-10 01:05:36.992343 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.36s
2026-03-10 01:05:36.992347 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.71s
2026-03-10 01:05:36.992351 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.56s
2026-03-10 01:05:36.992355 | orchestrator | keystone : Creating default user role ----------------------------------- 3.24s
2026-03-10 01:05:36.992359 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.83s
2026-03-10 01:05:36.992363 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.36s
2026-03-10 01:05:36.992392 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.28s
2026-03-10 01:05:36.992396 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.24s
2026-03-10 01:05:36.992400
| orchestrator | keystone : Run key distribution ----------------------------------------- 1.85s
2026-03-10 01:05:36.992404 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.84s
2026-03-10 01:05:36.992408 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 1.69s
2026-03-10 01:05:36.992412 | orchestrator | 2026-03-10 01:05:36 | INFO  | Task 4257c6cf-32a0-490b-9803-c64ea795ab77 is in state STARTED
2026-03-10 01:05:36.992416 | orchestrator | 2026-03-10 01:05:36 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:05:36.992420 | orchestrator | 2026-03-10 01:05:36 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:05:40.028643 | orchestrator | 2026-03-10 01:05:40 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED
2026-03-10 01:05:40.028823 | orchestrator | 2026-03-10 01:05:40 | INFO  | Task 89cb964f-d603-48c9-94cb-b17f62caaafb is in state STARTED
2026-03-10 01:05:40.030230 | orchestrator | 2026-03-10 01:05:40 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state STARTED
2026-03-10 01:05:40.031023 | orchestrator | 2026-03-10 01:05:40 | INFO  | Task 4257c6cf-32a0-490b-9803-c64ea795ab77 is in state STARTED
2026-03-10 01:05:40.031763 | orchestrator | 2026-03-10 01:05:40 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:05:40.031792 | orchestrator | 2026-03-10 01:05:40 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:05:43.076685 | orchestrator | 2026-03-10 01:05:43 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED
2026-03-10 01:05:43.077791 | orchestrator | 2026-03-10 01:05:43 | INFO  | Task 89cb964f-d603-48c9-94cb-b17f62caaafb is in state SUCCESS
2026-03-10 01:05:43.077957 | orchestrator | 2026-03-10 01:05:43 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state STARTED
2026-03-10 01:05:43.079294 | orchestrator | 2026-03-10 01:05:43 | INFO  | Task 4257c6cf-32a0-490b-9803-c64ea795ab77 is in state STARTED
2026-03-10 01:05:43.085061 | orchestrator | 2026-03-10 01:05:43 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED
2026-03-10 01:05:43.086054 | orchestrator | 2026-03-10 01:05:43 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:05:43.086090 | orchestrator | 2026-03-10 01:05:43 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:07:08.570456 | orchestrator | 2026-03-10 01:07:08 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED
2026-03-10 01:07:08.570733 | orchestrator | 2026-03-10 01:07:08 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state STARTED
2026-03-10 01:07:08.571442 | orchestrator | 2026-03-10 01:07:08 | INFO  | Task 4257c6cf-32a0-490b-9803-c64ea795ab77 is in state SUCCESS
2026-03-10 01:07:08.572574 | orchestrator | 2026-03-10 01:07:08 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED
2026-03-10 01:07:08.575919 | orchestrator | 2026-03-10 01:07:08 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:07:08.576005 | orchestrator | 2026-03-10 01:07:08 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:07:11.611260 | orchestrator | 2026-03-10 01:07:11 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED
2026-03-10 01:07:11.611614 | orchestrator | 2026-03-10 01:07:11 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state STARTED
2026-03-10 01:07:11.612656 | orchestrator | 2026-03-10 01:07:11 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED
2026-03-10 01:07:11.613576 | orchestrator | 2026-03-10 01:07:11 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:07:11.613595 | orchestrator | 2026-03-10 01:07:11 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:07:33.186976 | orchestrator | 2026-03-10 01:07:33 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED
2026-03-10 01:07:33.187980 | orchestrator | 2026-03-10 01:07:33 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state STARTED
2026-03-10 01:07:33.189169 | orchestrator | 2026-03-10 01:07:33 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED
2026-03-10 01:07:33.190669 | orchestrator | 2026-03-10 01:07:33 | INFO  | Task
2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:07:33.190708 | orchestrator | 2026-03-10 01:07:33 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:36.228910 | orchestrator | 2026-03-10 01:07:36 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:07:36.229144 | orchestrator | 2026-03-10 01:07:36 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state STARTED 2026-03-10 01:07:36.230473 | orchestrator | 2026-03-10 01:07:36 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:07:36.231430 | orchestrator | 2026-03-10 01:07:36 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:07:36.231516 | orchestrator | 2026-03-10 01:07:36 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:39.306098 | orchestrator | 2026-03-10 01:07:39 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:07:39.306200 | orchestrator | 2026-03-10 01:07:39 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state STARTED 2026-03-10 01:07:39.306215 | orchestrator | 2026-03-10 01:07:39 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:07:39.306227 | orchestrator | 2026-03-10 01:07:39 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:07:39.306271 | orchestrator | 2026-03-10 01:07:39 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:42.295631 | orchestrator | 2026-03-10 01:07:42 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:07:42.296857 | orchestrator | 2026-03-10 01:07:42 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state STARTED 2026-03-10 01:07:42.297553 | orchestrator | 2026-03-10 01:07:42 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:07:42.298461 | orchestrator | 2026-03-10 01:07:42 | INFO  | Task 
2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:07:42.298498 | orchestrator | 2026-03-10 01:07:42 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:45.335638 | orchestrator | 2026-03-10 01:07:45 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:07:45.337109 | orchestrator | 2026-03-10 01:07:45 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state STARTED 2026-03-10 01:07:45.339476 | orchestrator | 2026-03-10 01:07:45 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:07:45.345235 | orchestrator | 2026-03-10 01:07:45 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:07:45.345317 | orchestrator | 2026-03-10 01:07:45 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:48.366497 | orchestrator | 2026-03-10 01:07:48 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:07:48.366970 | orchestrator | 2026-03-10 01:07:48 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state STARTED 2026-03-10 01:07:48.367838 | orchestrator | 2026-03-10 01:07:48 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:07:48.368648 | orchestrator | 2026-03-10 01:07:48 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:07:48.368678 | orchestrator | 2026-03-10 01:07:48 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:51.404864 | orchestrator | 2026-03-10 01:07:51 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:07:51.406236 | orchestrator | 2026-03-10 01:07:51 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state STARTED 2026-03-10 01:07:51.407397 | orchestrator | 2026-03-10 01:07:51 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:07:51.408810 | orchestrator | 2026-03-10 01:07:51 | INFO  | Task 
2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:07:51.408840 | orchestrator | 2026-03-10 01:07:51 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:54.447142 | orchestrator | 2026-03-10 01:07:54 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:07:54.448593 | orchestrator | 2026-03-10 01:07:54 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state STARTED 2026-03-10 01:07:54.449349 | orchestrator | 2026-03-10 01:07:54 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:07:54.450284 | orchestrator | 2026-03-10 01:07:54 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:07:54.450316 | orchestrator | 2026-03-10 01:07:54 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:07:57.481680 | orchestrator | 2026-03-10 01:07:57 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:07:57.482758 | orchestrator | 2026-03-10 01:07:57 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state STARTED 2026-03-10 01:07:57.484463 | orchestrator | 2026-03-10 01:07:57 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:07:57.486246 | orchestrator | 2026-03-10 01:07:57 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:07:57.486679 | orchestrator | 2026-03-10 01:07:57 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:00.520182 | orchestrator | 2026-03-10 01:08:00 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:08:00.522462 | orchestrator | 2026-03-10 01:08:00 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state STARTED 2026-03-10 01:08:00.524126 | orchestrator | 2026-03-10 01:08:00 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:08:00.526423 | orchestrator | 2026-03-10 01:08:00 | INFO  | Task 
2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:08:00.526480 | orchestrator | 2026-03-10 01:08:00 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:03.561928 | orchestrator | 2026-03-10 01:08:03 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:08:03.562777 | orchestrator | 2026-03-10 01:08:03 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state STARTED 2026-03-10 01:08:03.565037 | orchestrator | 2026-03-10 01:08:03 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:08:03.567067 | orchestrator | 2026-03-10 01:08:03 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:08:03.567168 | orchestrator | 2026-03-10 01:08:03 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:06.629464 | orchestrator | 2026-03-10 01:08:06 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:08:06.629567 | orchestrator | 2026-03-10 01:08:06 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state STARTED 2026-03-10 01:08:06.629586 | orchestrator | 2026-03-10 01:08:06 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:08:06.629595 | orchestrator | 2026-03-10 01:08:06 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:08:06.629603 | orchestrator | 2026-03-10 01:08:06 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:09.689314 | orchestrator | 2026-03-10 01:08:09 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:08:09.689480 | orchestrator | 2026-03-10 01:08:09 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state STARTED 2026-03-10 01:08:09.690601 | orchestrator | 2026-03-10 01:08:09 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:08:09.692241 | orchestrator | 2026-03-10 01:08:09 | INFO  | Task 
2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:08:09.692714 | orchestrator | 2026-03-10 01:08:09 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:12.735738 | orchestrator | 2026-03-10 01:08:12 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:08:12.736618 | orchestrator | 2026-03-10 01:08:12 | INFO  | Task 899e05d1-48cc-48d6-b131-33cb9f6a3516 is in state SUCCESS 2026-03-10 01:08:12.737891 | orchestrator | 2026-03-10 01:08:12.737989 | orchestrator | 2026-03-10 01:08:12.738195 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:08:12.738207 | orchestrator | 2026-03-10 01:08:12.738217 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:08:12.738227 | orchestrator | Tuesday 10 March 2026 01:05:37 +0000 (0:00:00.140) 0:00:00.140 ********* 2026-03-10 01:08:12.738237 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:08:12.738270 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:08:12.738281 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:08:12.738290 | orchestrator | 2026-03-10 01:08:12.738300 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:08:12.738344 | orchestrator | Tuesday 10 March 2026 01:05:38 +0000 (0:00:00.383) 0:00:00.524 ********* 2026-03-10 01:08:12.738356 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-10 01:08:12.738366 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-10 01:08:12.738376 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-10 01:08:12.738385 | orchestrator | 2026-03-10 01:08:12.738395 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-03-10 01:08:12.738405 | orchestrator | 2026-03-10 01:08:12.738414 | orchestrator | TASK [Waiting for Keystone public 
port to be UP] ******************************* 2026-03-10 01:08:12.738424 | orchestrator | Tuesday 10 March 2026 01:05:38 +0000 (0:00:00.835) 0:00:01.359 ********* 2026-03-10 01:08:12.738434 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:08:12.738443 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:08:12.738453 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:08:12.738462 | orchestrator | 2026-03-10 01:08:12.738472 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:08:12.738482 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:08:12.738493 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:08:12.738503 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:08:12.738513 | orchestrator | 2026-03-10 01:08:12.738523 | orchestrator | 2026-03-10 01:08:12.738532 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:08:12.738542 | orchestrator | Tuesday 10 March 2026 01:05:39 +0000 (0:00:00.927) 0:00:02.287 ********* 2026-03-10 01:08:12.738552 | orchestrator | =============================================================================== 2026-03-10 01:08:12.738562 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.93s 2026-03-10 01:08:12.738571 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s 2026-03-10 01:08:12.738581 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s 2026-03-10 01:08:12.738591 | orchestrator | 2026-03-10 01:08:12.738600 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-10 01:08:12.738610 | orchestrator | 2.16.14 2026-03-10 01:08:12.738620 | orchestrator 
| 2026-03-10 01:08:12.738630 | orchestrator | PLAY [Bootstrap ceph dashboard] *********************************************** 2026-03-10 01:08:12.738639 | orchestrator | 2026-03-10 01:08:12.738649 | orchestrator | TASK [Disable the ceph dashboard] ********************************************** 2026-03-10 01:08:12.738658 | orchestrator | Tuesday 10 March 2026 01:05:37 +0000 (0:00:00.359) 0:00:00.359 ********* 2026-03-10 01:08:12.738668 | orchestrator | changed: [testbed-manager] 2026-03-10 01:08:12.738679 | orchestrator | 2026-03-10 01:08:12.738688 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-10 01:08:12.738698 | orchestrator | Tuesday 10 March 2026 01:05:39 +0000 (0:00:01.903) 0:00:02.263 ********* 2026-03-10 01:08:12.738707 | orchestrator | changed: [testbed-manager] 2026-03-10 01:08:12.738717 | orchestrator | 2026-03-10 01:08:12.738726 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-10 01:08:12.738736 | orchestrator | Tuesday 10 March 2026 01:05:40 +0000 (0:00:01.204) 0:00:03.467 ********* 2026-03-10 01:08:12.738745 | orchestrator | changed: [testbed-manager] 2026-03-10 01:08:12.738755 | orchestrator | 2026-03-10 01:08:12.738764 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-10 01:08:12.738774 | orchestrator | Tuesday 10 March 2026 01:05:41 +0000 (0:00:00.910) 0:00:04.377 ********* 2026-03-10 01:08:12.738791 | orchestrator | changed: [testbed-manager] 2026-03-10 01:08:12.738800 | orchestrator | 2026-03-10 01:08:12.738810 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-10 01:08:12.738819 | orchestrator | Tuesday 10 March 2026 01:05:42 +0000 (0:00:01.128) 0:00:05.506 ********* 2026-03-10 01:08:12.738829 | orchestrator | changed: [testbed-manager] 2026-03-10 01:08:12.738838 | orchestrator | 2026-03-10 01:08:12.738848 | orchestrator | 
TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-10 01:08:12.738858 | orchestrator | Tuesday 10 March 2026 01:05:43 +0000 (0:00:01.113) 0:00:06.619 ********* 2026-03-10 01:08:12.738868 | orchestrator | changed: [testbed-manager] 2026-03-10 01:08:12.738877 | orchestrator | 2026-03-10 01:08:12.738887 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-10 01:08:12.738896 | orchestrator | Tuesday 10 March 2026 01:05:44 +0000 (0:00:01.203) 0:00:07.823 ********* 2026-03-10 01:08:12.738906 | orchestrator | changed: [testbed-manager] 2026-03-10 01:08:12.738915 | orchestrator | 2026-03-10 01:08:12.738926 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-10 01:08:12.738935 | orchestrator | Tuesday 10 March 2026 01:05:47 +0000 (0:00:02.198) 0:00:10.022 ********* 2026-03-10 01:08:12.738945 | orchestrator | changed: [testbed-manager] 2026-03-10 01:08:12.738954 | orchestrator | 2026-03-10 01:08:12.738964 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-10 01:08:12.738974 | orchestrator | Tuesday 10 March 2026 01:05:48 +0000 (0:00:01.424) 0:00:11.447 ********* 2026-03-10 01:08:12.738984 | orchestrator | changed: [testbed-manager] 2026-03-10 01:08:12.738993 | orchestrator | 2026-03-10 01:08:12.739016 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-10 01:08:12.739027 | orchestrator | Tuesday 10 March 2026 01:06:43 +0000 (0:00:54.832) 0:01:06.279 ********* 2026-03-10 01:08:12.739036 | orchestrator | skipping: [testbed-manager] 2026-03-10 01:08:12.739046 | orchestrator | 2026-03-10 01:08:12.739056 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-10 01:08:12.739065 | orchestrator | 2026-03-10 01:08:12.739075 | orchestrator | TASK [Restart ceph manager service] 
******************************************** 2026-03-10 01:08:12.739085 | orchestrator | Tuesday 10 March 2026 01:06:43 +0000 (0:00:00.210) 0:01:06.490 ********* 2026-03-10 01:08:12.739094 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:08:12.739104 | orchestrator | 2026-03-10 01:08:12.739117 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-10 01:08:12.739127 | orchestrator | 2026-03-10 01:08:12.739137 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-10 01:08:12.739147 | orchestrator | Tuesday 10 March 2026 01:06:55 +0000 (0:00:11.593) 0:01:18.083 ********* 2026-03-10 01:08:12.739156 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:08:12.739166 | orchestrator | 2026-03-10 01:08:12.739175 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-10 01:08:12.739185 | orchestrator | 2026-03-10 01:08:12.739195 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-10 01:08:12.739204 | orchestrator | Tuesday 10 March 2026 01:06:56 +0000 (0:00:01.143) 0:01:19.227 ********* 2026-03-10 01:08:12.739214 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:08:12.739223 | orchestrator | 2026-03-10 01:08:12.739233 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:08:12.739243 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-10 01:08:12.739254 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:08:12.739264 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:08:12.739274 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 
01:08:12.739289 | orchestrator | 2026-03-10 01:08:12.739299 | orchestrator | 2026-03-10 01:08:12.739309 | orchestrator | 2026-03-10 01:08:12.739349 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:08:12.739370 | orchestrator | Tuesday 10 March 2026 01:07:07 +0000 (0:00:11.129) 0:01:30.356 ********* 2026-03-10 01:08:12.739381 | orchestrator | =============================================================================== 2026-03-10 01:08:12.739391 | orchestrator | Create admin user ------------------------------------------------------ 54.83s 2026-03-10 01:08:12.739400 | orchestrator | Restart ceph manager service ------------------------------------------- 23.87s 2026-03-10 01:08:12.739410 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.20s 2026-03-10 01:08:12.739419 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.90s 2026-03-10 01:08:12.739429 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.42s 2026-03-10 01:08:12.739439 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 1.20s 2026-03-10 01:08:12.739448 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.20s 2026-03-10 01:08:12.739458 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.13s 2026-03-10 01:08:12.739467 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 1.11s 2026-03-10 01:08:12.739477 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.91s 2026-03-10 01:08:12.739486 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.21s 2026-03-10 01:08:12.739496 | orchestrator | 2026-03-10 01:08:12.739505 | orchestrator | 2026-03-10 01:08:12.739515 | orchestrator | PLAY [Group 
hosts based on configuration] ************************************** 2026-03-10 01:08:12.739525 | orchestrator | 2026-03-10 01:08:12.739534 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:08:12.739544 | orchestrator | Tuesday 10 March 2026 01:05:41 +0000 (0:00:00.267) 0:00:00.267 ********* 2026-03-10 01:08:12.739553 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:08:12.739563 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:08:12.739590 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:08:12.739600 | orchestrator | 2026-03-10 01:08:12.739610 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:08:12.739619 | orchestrator | Tuesday 10 March 2026 01:05:41 +0000 (0:00:00.415) 0:00:00.682 ********* 2026-03-10 01:08:12.739629 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-10 01:08:12.739639 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-10 01:08:12.739648 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-10 01:08:12.739658 | orchestrator | 2026-03-10 01:08:12.739668 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-03-10 01:08:12.739678 | orchestrator | 2026-03-10 01:08:12.739687 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-10 01:08:12.739697 | orchestrator | Tuesday 10 March 2026 01:05:42 +0000 (0:00:00.681) 0:00:01.364 ********* 2026-03-10 01:08:12.739707 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:08:12.739716 | orchestrator | 2026-03-10 01:08:12.739726 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-10 01:08:12.739736 | orchestrator | Tuesday 10 March 2026 01:05:43 +0000 (0:00:00.847) 
0:00:02.211 ********* 2026-03-10 01:08:12.739745 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-10 01:08:12.739755 | orchestrator | 2026-03-10 01:08:12.739772 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-10 01:08:12.739782 | orchestrator | Tuesday 10 March 2026 01:05:47 +0000 (0:00:04.522) 0:00:06.734 ********* 2026-03-10 01:08:12.739792 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-10 01:08:12.739809 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-10 01:08:12.739819 | orchestrator | 2026-03-10 01:08:12.739828 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-10 01:08:12.739844 | orchestrator | Tuesday 10 March 2026 01:05:55 +0000 (0:00:07.267) 0:00:14.001 ********* 2026-03-10 01:08:12.739854 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-10 01:08:12.739864 | orchestrator | 2026-03-10 01:08:12.739874 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-10 01:08:12.739883 | orchestrator | Tuesday 10 March 2026 01:05:59 +0000 (0:00:03.774) 0:00:17.776 ********* 2026-03-10 01:08:12.739893 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-10 01:08:12.739902 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-10 01:08:12.739912 | orchestrator | 2026-03-10 01:08:12.739922 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-10 01:08:12.739931 | orchestrator | Tuesday 10 March 2026 01:06:03 +0000 (0:00:03.975) 0:00:21.751 ********* 2026-03-10 01:08:12.739941 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-10 01:08:12.739950 | orchestrator | changed: [testbed-node-0] => 
(item=key-manager:service-admin) 2026-03-10 01:08:12.739960 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-10 01:08:12.739970 | orchestrator | changed: [testbed-node-0] => (item=observer) 2026-03-10 01:08:12.739980 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-10 01:08:12.739989 | orchestrator | 2026-03-10 01:08:12.739999 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-10 01:08:12.740009 | orchestrator | Tuesday 10 March 2026 01:06:20 +0000 (0:00:17.318) 0:00:39.069 ********* 2026-03-10 01:08:12.740018 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-10 01:08:12.740028 | orchestrator | 2026-03-10 01:08:12.740037 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-10 01:08:12.740047 | orchestrator | Tuesday 10 March 2026 01:06:25 +0000 (0:00:04.709) 0:00:43.779 ********* 2026-03-10 01:08:12.740060 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-10 01:08:12.740074 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:12.740099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:12.740115 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.740126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.740136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.740146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.740157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.740181 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.740191 | orchestrator |
2026-03-10 01:08:12.740201 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-03-10 01:08:12.740212 | orchestrator | Tuesday 10 March 2026 01:06:28 +0000 (0:00:03.705) 0:00:47.484 *********
2026-03-10 01:08:12.740221 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-03-10 01:08:12.740231 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-03-10 01:08:12.740241 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-03-10 01:08:12.740250 | orchestrator |
2026-03-10 01:08:12.740264 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-03-10 01:08:12.740274 | orchestrator | Tuesday 10 March 2026 01:06:31 +0000 (0:00:02.925) 0:00:50.410 *********
2026-03-10 01:08:12.740284 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:08:12.740293 | orchestrator |
2026-03-10 01:08:12.740303 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-03-10 01:08:12.740313 | orchestrator | Tuesday 10 March 2026 01:06:32 +0000 (0:00:00.403) 0:00:50.814 *********
2026-03-10 01:08:12.740343 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:08:12.740353 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:08:12.740363 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:08:12.740373 | orchestrator |
2026-03-10 01:08:12.740382 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-10 01:08:12.740392 | orchestrator | Tuesday 10 March 2026 01:06:33 +0000 (0:00:00.944) 0:00:51.759 *********
2026-03-10 01:08:12.740401 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:08:12.740411 | orchestrator |
2026-03-10 01:08:12.740420 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-03-10 01:08:12.740430 | orchestrator | Tuesday 10 March 2026 01:06:33 +0000 (0:00:00.789) 0:00:52.548 *********
2026-03-10 01:08:12.740440 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:12.740451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:12.740471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:12.740488 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.740524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.740536 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.740546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.740569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.740579 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.740589 | orchestrator |
2026-03-10 01:08:12.740599 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2026-03-10 01:08:12.740609 | orchestrator | Tuesday 10 March 2026 01:06:38 +0000 (0:00:04.325) 0:00:56.874 *********
2026-03-10 01:08:12.740630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:12.740641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.740652 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.740663 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:08:12.740673 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:12.740690 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.740705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.740715 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:08:12.740730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:12.740740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.740750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.740766 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:08:12.740776 | orchestrator |
2026-03-10 01:08:12.740786 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2026-03-10 01:08:12.740796 | orchestrator | Tuesday 10 March 2026 01:06:40 +0000 (0:00:02.526) 0:00:59.402 *********
2026-03-10 01:08:12.740806 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:12.740816 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.741036 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.741093 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:08:12.741107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:12.741118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.741143 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.741151 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:08:12.741160 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:12.741181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.741194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.741203 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:08:12.741211 | orchestrator |
2026-03-10 01:08:12.741220 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-03-10 01:08:12.741229 | orchestrator | Tuesday 10 March 2026 01:06:43 +0000 (0:00:02.446) 0:01:01.848 *********
2026-03-10 01:08:12.741237 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:12.741252 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:12.741260 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:12.741274 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.741286 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.741295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.741309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.741349 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.741359 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.741368 | orchestrator |
2026-03-10 01:08:12.741376 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-03-10 01:08:12.741384 | orchestrator | Tuesday 10 March 2026 01:06:47 +0000 (0:00:03.919) 0:01:05.768 *********
2026-03-10 01:08:12.741392 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:08:12.741399 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:08:12.741407 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:08:12.741415 | orchestrator |
2026-03-10 01:08:12.741423 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-03-10 01:08:12.741430 | orchestrator | Tuesday 10 March 2026 01:06:51 +0000 (0:00:04.606) 0:01:10.374 *********
2026-03-10 01:08:12.741438 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-10 01:08:12.741446 | orchestrator |
2026-03-10 01:08:12.741453 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-03-10 01:08:12.741461 | orchestrator | Tuesday 10 March 2026 01:06:55 +0000 (0:00:03.492) 0:01:13.867 *********
2026-03-10 01:08:12.741473 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:08:12.741482 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:08:12.741490 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:08:12.741497 | orchestrator |
2026-03-10 01:08:12.741505 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-03-10 01:08:12.741513 | orchestrator | Tuesday 10 March 2026 01:06:57 +0000 (0:00:02.494) 0:01:16.361 *********
2026-03-10 01:08:12.741535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:12.741559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:12.741568 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-10 01:08:12.741576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-10 01:08:12.741589 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True,
'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:12.741602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:12.741615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:12.741624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:12.741632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:12.741640 | orchestrator | 2026-03-10 01:08:12.741649 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-10 01:08:12.741657 | orchestrator | Tuesday 10 March 2026 01:07:12 +0000 (0:00:15.055) 0:01:31.417 ********* 2026-03-10 01:08:12.741665 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-10 01:08:12.741682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-10 01:08:12.741696 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-10 01:08:12.741704 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:08:12.741712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-10 01:08:12.741720 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:08:12.741729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:08:12.741737 | orchestrator | skipping: [testbed-node-0] 
2026-03-10 01:08:12.741745 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-10 01:08:12.741764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-10 01:08:12.741777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:08:12.741785 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:08:12.741793 | orchestrator | 2026-03-10 01:08:12.741801 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-10 01:08:12.741809 | orchestrator | Tuesday 10 March 2026 01:07:13 +0000 (0:00:01.159) 0:01:32.576 ********* 2026-03-10 01:08:12.741817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-10 01:08:12.741826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-10 01:08:12.741838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-10 01:08:12.741859 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:12.741868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:12.741877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:12.741885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:12.741893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:12.741902 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:08:12.741915 | orchestrator | 2026-03-10 01:08:12.741924 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-10 01:08:12.741932 | orchestrator | Tuesday 10 March 2026 01:07:19 +0000 (0:00:05.760) 0:01:38.336 ********* 2026-03-10 01:08:12.741939 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:08:12.741948 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:08:12.741956 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:08:12.741963 | orchestrator | 2026-03-10 
01:08:12.741975 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2026-03-10 01:08:12.741984 | orchestrator | Tuesday 10 March 2026 01:07:20 +0000 (0:00:00.654) 0:01:38.991 *********
2026-03-10 01:08:12.741992 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:08:12.742000 | orchestrator |
2026-03-10 01:08:12.742008 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2026-03-10 01:08:12.742045 | orchestrator | Tuesday 10 March 2026 01:07:22 +0000 (0:00:02.714) 0:01:41.705 *********
2026-03-10 01:08:12.742055 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:08:12.742064 | orchestrator |
2026-03-10 01:08:12.742072 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2026-03-10 01:08:12.742092 | orchestrator | Tuesday 10 March 2026 01:07:25 +0000 (0:00:02.784) 0:01:44.489 *********
2026-03-10 01:08:12.742100 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:08:12.742108 | orchestrator |
2026-03-10 01:08:12.742116 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-10 01:08:12.742124 | orchestrator | Tuesday 10 March 2026 01:07:39 +0000 (0:00:13.323) 0:01:57.813 *********
2026-03-10 01:08:12.742132 | orchestrator |
2026-03-10 01:08:12.742140 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-10 01:08:12.742148 | orchestrator | Tuesday 10 March 2026 01:07:39 +0000 (0:00:00.078) 0:01:57.892 *********
2026-03-10 01:08:12.742155 | orchestrator |
2026-03-10 01:08:12.742163 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2026-03-10 01:08:12.742171 | orchestrator | Tuesday 10 March 2026 01:07:39 +0000 (0:00:00.103) 0:01:57.995 *********
2026-03-10 01:08:12.742179 | orchestrator |
2026-03-10 01:08:12.742186 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2026-03-10 01:08:12.742194 | orchestrator | Tuesday 10 March 2026 01:07:39 +0000 (0:00:00.069) 0:01:58.065 *********
2026-03-10 01:08:12.742202 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:08:12.742210 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:08:12.742217 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:08:12.742225 | orchestrator |
2026-03-10 01:08:12.742233 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2026-03-10 01:08:12.742241 | orchestrator | Tuesday 10 March 2026 01:07:48 +0000 (0:00:09.333) 0:02:07.398 *********
2026-03-10 01:08:12.742248 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:08:12.742256 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:08:12.742264 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:08:12.742272 | orchestrator |
2026-03-10 01:08:12.742280 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2026-03-10 01:08:12.742288 | orchestrator | Tuesday 10 March 2026 01:08:01 +0000 (0:00:12.431) 0:02:19.830 *********
2026-03-10 01:08:12.742296 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:08:12.742304 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:08:12.742311 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:08:12.742348 | orchestrator |
2026-03-10 01:08:12.742359 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:08:12.742367 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-10 01:08:12.742376 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-10 01:08:12.742393 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-10 01:08:12.742401 | orchestrator |
2026-03-10 01:08:12.742409 | orchestrator |
2026-03-10 01:08:12.742418 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:08:12.742425 | orchestrator | Tuesday 10 March 2026 01:08:10 +0000 (0:00:09.796) 0:02:29.626 *********
2026-03-10 01:08:12.742433 | orchestrator | ===============================================================================
2026-03-10 01:08:12.742441 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.32s
2026-03-10 01:08:12.742449 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 15.05s
2026-03-10 01:08:12.742458 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 13.32s
2026-03-10 01:08:12.742465 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 12.43s
2026-03-10 01:08:12.742473 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 9.80s
2026-03-10 01:08:12.742482 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.33s
2026-03-10 01:08:12.742492 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.27s
2026-03-10 01:08:12.742507 | orchestrator | barbican : Check barbican containers ------------------------------------ 5.76s
2026-03-10 01:08:12.742520 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.71s
2026-03-10 01:08:12.742534 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 4.61s
2026-03-10 01:08:12.742548 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.52s
2026-03-10 01:08:12.742561 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.33s
2026-03-10 01:08:12.742575 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.97s
2026-03-10 01:08:12.742588 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.92s
2026-03-10 01:08:12.742602 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.78s
2026-03-10 01:08:12.742617 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 3.71s
2026-03-10 01:08:12.742631 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 3.49s
2026-03-10 01:08:12.742658 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 2.93s
2026-03-10 01:08:12.742667 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.78s
2026-03-10 01:08:12.742675 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.71s
2026-03-10 01:08:12.742683 | orchestrator | 2026-03-10 01:08:12 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED
2026-03-10 01:08:12.742691 | orchestrator | 2026-03-10 01:08:12 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:08:12.742705 | orchestrator | 2026-03-10 01:08:12 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:08:15.770514 | orchestrator | 2026-03-10 01:08:15 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED
2026-03-10 01:08:15.772748 | orchestrator | 2026-03-10 01:08:15 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED
2026-03-10 01:08:15.773335 | orchestrator | 2026-03-10 01:08:15 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED
2026-03-10 01:08:15.774100 | orchestrator | 2026-03-10 01:08:15 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:08:15.774121 | orchestrator | 2026-03-10 01:08:15 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:08:18.815690 | orchestrator | 2026-03-10
01:08:18 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED
2026-03-10 01:08:18.816760 | orchestrator | 2026-03-10 01:08:18 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED
2026-03-10 01:08:18.817726 | orchestrator | 2026-03-10 01:08:18 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED
2026-03-10 01:08:18.818976 | orchestrator | 2026-03-10 01:08:18 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:08:18.819250 | orchestrator | 2026-03-10 01:08:18 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:08:21.887904 | orchestrator | 2026-03-10 01:08:21 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED
2026-03-10 01:08:21.889821 | orchestrator | 2026-03-10 01:08:21 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED
2026-03-10 01:08:21.893783 | orchestrator | 2026-03-10 01:08:21 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED
2026-03-10 01:08:21.897639 | orchestrator | 2026-03-10 01:08:21 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:08:21.898402 | orchestrator | 2026-03-10 01:08:21 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:08:24.946825 | orchestrator | 2026-03-10 01:08:24 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED
2026-03-10 01:08:24.948813 | orchestrator | 2026-03-10 01:08:24 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED
2026-03-10 01:08:24.950406 | orchestrator | 2026-03-10 01:08:24 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED
2026-03-10 01:08:24.953049 | orchestrator | 2026-03-10 01:08:24 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:08:24.953181 | orchestrator | 2026-03-10 01:08:24 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:08:27.997754 | orchestrator | 2026-03-10 01:08:27 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED
2026-03-10 01:08:27.998531 | orchestrator | 2026-03-10 01:08:27 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED
2026-03-10 01:08:27.999565 | orchestrator | 2026-03-10 01:08:28 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED
2026-03-10 01:08:28.000790 | orchestrator | 2026-03-10 01:08:28 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:08:28.000820 | orchestrator | 2026-03-10 01:08:28 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:08:31.050856 | orchestrator | 2026-03-10 01:08:31 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED
2026-03-10 01:08:31.051843 | orchestrator | 2026-03-10 01:08:31 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED
2026-03-10 01:08:31.052999 | orchestrator | 2026-03-10 01:08:31 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED
2026-03-10 01:08:31.054702 | orchestrator | 2026-03-10 01:08:31 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:08:31.054790 | orchestrator | 2026-03-10 01:08:31 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:08:34.097122 | orchestrator | 2026-03-10 01:08:34 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED
2026-03-10 01:08:34.097567 | orchestrator | 2026-03-10 01:08:34 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED
2026-03-10 01:08:34.098420 | orchestrator | 2026-03-10 01:08:34 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED
2026-03-10 01:08:34.098897 | orchestrator | 2026-03-10 01:08:34 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:08:34.098932 | orchestrator | 2026-03-10 01:08:34 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:08:37.147880 | orchestrator | 2026-03-10 01:08:37 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED
2026-03-10 01:08:37.148099 | orchestrator | 2026-03-10 01:08:37 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED
2026-03-10 01:08:37.149034 | orchestrator | 2026-03-10 01:08:37 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED
2026-03-10 01:08:37.149617 | orchestrator | 2026-03-10 01:08:37 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:08:37.149651 | orchestrator | 2026-03-10 01:08:37 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:08:40.185377 | orchestrator | 2026-03-10 01:08:40 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED
2026-03-10 01:08:40.186051 | orchestrator | 2026-03-10 01:08:40 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED
2026-03-10 01:08:40.188116 | orchestrator | 2026-03-10 01:08:40 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED
2026-03-10 01:08:40.189188 | orchestrator | 2026-03-10 01:08:40 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:08:40.189256 | orchestrator | 2026-03-10 01:08:40 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:08:43.224791 | orchestrator | 2026-03-10 01:08:43 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED
2026-03-10 01:08:43.225648 | orchestrator | 2026-03-10 01:08:43 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED
2026-03-10 01:08:43.226612 | orchestrator | 2026-03-10 01:08:43 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED
2026-03-10 01:08:43.227387 | orchestrator | 2026-03-10 01:08:43 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:08:43.227560 | orchestrator | 2026-03-10 01:08:43 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:08:46.288763 | orchestrator | 2026-03-10 01:08:46 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED
2026-03-10 01:08:46.289523 | orchestrator | 2026-03-10 01:08:46 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED
2026-03-10 01:08:46.291580 | orchestrator | 2026-03-10 01:08:46 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED
2026-03-10 01:08:46.293361 | orchestrator | 2026-03-10 01:08:46 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:08:46.293400 | orchestrator | 2026-03-10 01:08:46 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:08:49.412648 | orchestrator | 2026-03-10 01:08:49 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED
2026-03-10 01:08:49.413227 | orchestrator | 2026-03-10 01:08:49 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED
2026-03-10 01:08:49.414740 | orchestrator | 2026-03-10 01:08:49 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED
2026-03-10 01:08:49.415768 | orchestrator | 2026-03-10 01:08:49 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:08:49.415791 | orchestrator | 2026-03-10 01:08:49 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:08:52.481415 | orchestrator | 2026-03-10 01:08:52 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED
2026-03-10 01:08:52.485184 | orchestrator | 2026-03-10 01:08:52 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED
2026-03-10 01:08:52.488157 | orchestrator | 2026-03-10 01:08:52 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED
2026-03-10 01:08:52.490711 | orchestrator | 2026-03-10 01:08:52 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:08:52.490762 | orchestrator | 2026-03-10 01:08:52 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:08:55.542164 | orchestrator | 2026-03-10 01:08:55 | INFO  | Task
9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:08:55.542260 | orchestrator | 2026-03-10 01:08:55 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:08:55.542276 | orchestrator | 2026-03-10 01:08:55 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:08:55.542288 | orchestrator | 2026-03-10 01:08:55 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:08:55.542299 | orchestrator | 2026-03-10 01:08:55 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:08:58.567730 | orchestrator | 2026-03-10 01:08:58 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:08:58.571846 | orchestrator | 2026-03-10 01:08:58 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:08:58.572340 | orchestrator | 2026-03-10 01:08:58 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:08:58.574198 | orchestrator | 2026-03-10 01:08:58 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:08:58.574253 | orchestrator | 2026-03-10 01:08:58 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:01.608558 | orchestrator | 2026-03-10 01:09:01 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:09:01.609880 | orchestrator | 2026-03-10 01:09:01 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:09:01.611965 | orchestrator | 2026-03-10 01:09:01 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:09:01.613237 | orchestrator | 2026-03-10 01:09:01 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:09:01.613347 | orchestrator | 2026-03-10 01:09:01 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:04.641600 | orchestrator | 2026-03-10 01:09:04 | INFO  | Task 
9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:09:04.643517 | orchestrator | 2026-03-10 01:09:04 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:09:04.644182 | orchestrator | 2026-03-10 01:09:04 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:09:04.644916 | orchestrator | 2026-03-10 01:09:04 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:09:04.644945 | orchestrator | 2026-03-10 01:09:04 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:07.684583 | orchestrator | 2026-03-10 01:09:07 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:09:07.684656 | orchestrator | 2026-03-10 01:09:07 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:09:07.684726 | orchestrator | 2026-03-10 01:09:07 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:09:07.685471 | orchestrator | 2026-03-10 01:09:07 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:09:07.685496 | orchestrator | 2026-03-10 01:09:07 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:10.718625 | orchestrator | 2026-03-10 01:09:10 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:09:10.719385 | orchestrator | 2026-03-10 01:09:10 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:09:10.720639 | orchestrator | 2026-03-10 01:09:10 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:09:10.721545 | orchestrator | 2026-03-10 01:09:10 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:09:10.721574 | orchestrator | 2026-03-10 01:09:10 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:13.766938 | orchestrator | 2026-03-10 01:09:13 | INFO  | Task 
9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:09:13.767656 | orchestrator | 2026-03-10 01:09:13 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:09:13.770244 | orchestrator | 2026-03-10 01:09:13 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:09:13.771541 | orchestrator | 2026-03-10 01:09:13 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:09:13.771595 | orchestrator | 2026-03-10 01:09:13 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:16.869081 | orchestrator | 2026-03-10 01:09:16 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:09:16.869944 | orchestrator | 2026-03-10 01:09:16 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:09:16.871004 | orchestrator | 2026-03-10 01:09:16 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:09:16.872359 | orchestrator | 2026-03-10 01:09:16 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:09:16.872463 | orchestrator | 2026-03-10 01:09:16 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:19.946343 | orchestrator | 2026-03-10 01:09:19 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:09:19.948529 | orchestrator | 2026-03-10 01:09:19 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:09:19.950490 | orchestrator | 2026-03-10 01:09:19 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:09:19.953610 | orchestrator | 2026-03-10 01:09:19 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:09:19.953655 | orchestrator | 2026-03-10 01:09:19 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:23.000902 | orchestrator | 2026-03-10 01:09:22 | INFO  | Task 
9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:09:23.001024 | orchestrator | 2026-03-10 01:09:22 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:09:23.003897 | orchestrator | 2026-03-10 01:09:23 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:09:23.007835 | orchestrator | 2026-03-10 01:09:23 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:09:23.007924 | orchestrator | 2026-03-10 01:09:23 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:26.060974 | orchestrator | 2026-03-10 01:09:26 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:09:26.062187 | orchestrator | 2026-03-10 01:09:26 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:09:26.064543 | orchestrator | 2026-03-10 01:09:26 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:09:26.068109 | orchestrator | 2026-03-10 01:09:26 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:09:26.068166 | orchestrator | 2026-03-10 01:09:26 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:29.106913 | orchestrator | 2026-03-10 01:09:29 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:09:29.108815 | orchestrator | 2026-03-10 01:09:29 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:09:29.109909 | orchestrator | 2026-03-10 01:09:29 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:09:29.112019 | orchestrator | 2026-03-10 01:09:29 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:09:29.112091 | orchestrator | 2026-03-10 01:09:29 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:32.162760 | orchestrator | 2026-03-10 01:09:32 | INFO  | Task 
9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:09:32.164049 | orchestrator | 2026-03-10 01:09:32 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state STARTED 2026-03-10 01:09:32.165949 | orchestrator | 2026-03-10 01:09:32 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:09:32.166622 | orchestrator | 2026-03-10 01:09:32 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:09:32.166669 | orchestrator | 2026-03-10 01:09:32 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:35.223047 | orchestrator | 2026-03-10 01:09:35 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:09:35.224261 | orchestrator | 2026-03-10 01:09:35 | INFO  | Task 532e4efc-644c-4e5d-9a37-384547680560 is in state STARTED 2026-03-10 01:09:35.227760 | orchestrator | 2026-03-10 01:09:35 | INFO  | Task 3b8b43fb-f258-4889-8316-fdf8a8a2d52d is in state SUCCESS 2026-03-10 01:09:35.229672 | orchestrator | 2026-03-10 01:09:35.229733 | orchestrator | 2026-03-10 01:09:35.229740 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:09:35.229746 | orchestrator | 2026-03-10 01:09:35.229750 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:09:35.229754 | orchestrator | Tuesday 10 March 2026 01:05:47 +0000 (0:00:00.391) 0:00:00.391 ********* 2026-03-10 01:09:35.229758 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:09:35.229764 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:09:35.229768 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:09:35.229771 | orchestrator | 2026-03-10 01:09:35.229776 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:09:35.229780 | orchestrator | Tuesday 10 March 2026 01:05:47 +0000 (0:00:00.391) 0:00:00.783 ********* 2026-03-10 01:09:35.229784 | 
orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2026-03-10 01:09:35.229788 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2026-03-10 01:09:35.229792 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2026-03-10 01:09:35.229796 | orchestrator | 2026-03-10 01:09:35.229799 | orchestrator | PLAY [Apply role designate] **************************************************** 2026-03-10 01:09:35.229804 | orchestrator | 2026-03-10 01:09:35.229807 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-10 01:09:35.229816 | orchestrator | Tuesday 10 March 2026 01:05:48 +0000 (0:00:00.703) 0:00:01.487 ********* 2026-03-10 01:09:35.229833 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:09:35.229838 | orchestrator | 2026-03-10 01:09:35.229841 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2026-03-10 01:09:35.229845 | orchestrator | Tuesday 10 March 2026 01:05:49 +0000 (0:00:00.852) 0:00:02.339 ********* 2026-03-10 01:09:35.229849 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2026-03-10 01:09:35.229852 | orchestrator | 2026-03-10 01:09:35.229861 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2026-03-10 01:09:35.229865 | orchestrator | Tuesday 10 March 2026 01:05:53 +0000 (0:00:03.986) 0:00:06.325 ********* 2026-03-10 01:09:35.229882 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2026-03-10 01:09:35.229887 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2026-03-10 01:09:35.229890 | orchestrator | 2026-03-10 01:09:35.229894 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2026-03-10 
01:09:35.229898 | orchestrator | Tuesday 10 March 2026 01:06:00 +0000 (0:00:06.934) 0:00:13.260 ********* 2026-03-10 01:09:35.229911 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-10 01:09:35.229915 | orchestrator | 2026-03-10 01:09:35.229925 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2026-03-10 01:09:35.229929 | orchestrator | Tuesday 10 March 2026 01:06:03 +0000 (0:00:03.506) 0:00:16.767 ********* 2026-03-10 01:09:35.229938 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2026-03-10 01:09:35.229942 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-10 01:09:35.229946 | orchestrator | 2026-03-10 01:09:35.229950 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2026-03-10 01:09:35.229954 | orchestrator | Tuesday 10 March 2026 01:06:07 +0000 (0:00:04.056) 0:00:20.824 ********* 2026-03-10 01:09:35.229963 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-10 01:09:35.229967 | orchestrator | 2026-03-10 01:09:35.229971 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2026-03-10 01:09:35.229980 | orchestrator | Tuesday 10 March 2026 01:06:12 +0000 (0:00:04.218) 0:00:25.042 ********* 2026-03-10 01:09:35.229984 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-10 01:09:35.229988 | orchestrator | 2026-03-10 01:09:35.229997 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-10 01:09:35.230001 | orchestrator | Tuesday 10 March 2026 01:06:17 +0000 (0:00:05.086) 0:00:30.128 ********* 2026-03-10 01:09:35.230044 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:09:35.230119 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:09:35.230147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:09:35.230151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230155 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230160 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230173 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230183 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230188 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230220 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230224 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230234 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230272 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230282 | orchestrator | 2026-03-10 01:09:35.230287 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-10 01:09:35.230348 | orchestrator | Tuesday 10 March 2026 01:06:22 +0000 (0:00:05.113) 0:00:35.241 ********* 2026-03-10 01:09:35.230355 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:09:35.230360 | orchestrator | 2026-03-10 01:09:35.230364 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-10 01:09:35.230368 | orchestrator | Tuesday 10 March 2026 01:06:22 +0000 (0:00:00.515) 0:00:35.756 ********* 2026-03-10 01:09:35.230373 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:09:35.230379 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:09:35.230385 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:09:35.230391 | orchestrator | 2026-03-10 01:09:35.230398 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-10 01:09:35.230403 | orchestrator | Tuesday 10 March 2026 01:06:23 +0000 (0:00:00.777) 0:00:36.534 ********* 2026-03-10 01:09:35.230409 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:09:35.230416 | orchestrator | 2026-03-10 01:09:35.230422 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2026-03-10 01:09:35.230428 | orchestrator | Tuesday 10 March 2026 01:06:26 +0000 (0:00:03.373) 0:00:39.907 ********* 2026-03-10 01:09:35.230434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:09:35.230453 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:09:35.230464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:09:35.230471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230492 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230531 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230565 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 
5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.230609 | orchestrator | 2026-03-10 01:09:35.230616 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2026-03-10 01:09:35.230622 | orchestrator | Tuesday 10 March 2026 01:06:35 +0000 (0:00:08.560) 0:00:48.468 ********* 2026-03-10 01:09:35.230629 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 01:09:35.230649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 01:09:35.230657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.230665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.230669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.230673 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.230677 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:09:35.230681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 01:09:35.230689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 01:09:35.230816 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.230877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.230882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.230886 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.230890 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:09:35.230894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 01:09:35.230903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 01:09:35.230911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.230915 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.230922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.230926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.230930 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:09:35.230934 | orchestrator | 2026-03-10 01:09:35.230938 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2026-03-10 01:09:35.230942 | orchestrator | Tuesday 10 March 2026 01:06:38 +0000 (0:00:02.660) 0:00:51.128 ********* 2026-03-10 01:09:35.230963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}})  2026-03-10 01:09:35.230972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 01:09:35.230979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.230984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.230991 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.230995 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.230999 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:09:35.231003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 01:09:35.231011 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 01:09:35.231017 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231021 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231028 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231036 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:09:35.231040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 01:09:35.231048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 01:09:35.231052 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231111 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:09:35.231115 | orchestrator | 2026-03-10 01:09:35.231119 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2026-03-10 01:09:35.231123 | orchestrator | Tuesday 10 March 2026 01:06:42 +0000 (0:00:03.984) 0:00:55.113 ********* 2026-03-10 01:09:35.231131 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:09:35.231135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:09:35.231143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:09:35.231150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2026-03-10 01:09:35.231161 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231174 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231185 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231214 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231224 | orchestrator | 2026-03-10 01:09:35.231228 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-10 01:09:35.231236 | orchestrator | Tuesday 10 March 2026 01:06:49 +0000 (0:00:07.892) 0:01:03.005 ********* 2026-03-10 01:09:35.231240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:09:35.231244 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:09:35.231248 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:09:35.231395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231407 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231415 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231423 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231427 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231439 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231465 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231484 | orchestrator | 2026-03-10 01:09:35.231488 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-10 01:09:35.231492 | orchestrator | Tuesday 10 March 2026 01:07:22 +0000 (0:00:32.682) 0:01:35.688 ********* 2026-03-10 01:09:35.231498 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-10 01:09:35.231502 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-10 01:09:35.231506 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-10 01:09:35.231510 | orchestrator | 2026-03-10 01:09:35.231514 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-10 01:09:35.231518 | orchestrator | Tuesday 10 March 2026 01:07:30 +0000 (0:00:07.945) 0:01:43.634 ********* 2026-03-10 01:09:35.231521 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-10 01:09:35.231525 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-10 01:09:35.231529 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-10 01:09:35.231533 | orchestrator | 2026-03-10 01:09:35.231536 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-10 01:09:35.231540 | orchestrator | Tuesday 10 March 2026 01:07:34 +0000 (0:00:03.422) 0:01:47.056 ********* 2026-03-10 01:09:35.231544 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 01:09:35.231548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 01:09:35.231556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 01:09:35.231566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 
'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231585 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231589 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231610 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231664 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231669 | orchestrator | 2026-03-10 01:09:35.231673 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-10 01:09:35.231676 | orchestrator | Tuesday 10 March 2026 01:07:38 +0000 (0:00:03.985) 0:01:51.041 ********* 2026-03-10 01:09:35.231680 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 
01:09:35.231684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 01:09:35.231708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 01:09:35.231718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231726 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231750 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': 
['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231763 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231775 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.231873 | orchestrator | 2026-03-10 01:09:35.231877 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-10 01:09:35.231881 | orchestrator | Tuesday 10 March 2026 01:07:41 +0000 (0:00:03.239) 0:01:54.281 ********* 2026-03-10 01:09:35.231885 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:09:35.231888 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:09:35.231892 | orchestrator | skipping: [testbed-node-2] 
2026-03-10 01:09:35.231896 | orchestrator | 2026-03-10 01:09:35.231902 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-10 01:09:35.231906 | orchestrator | Tuesday 10 March 2026 01:07:43 +0000 (0:00:01.854) 0:01:56.136 ********* 2026-03-10 01:09:35.231910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 01:09:35.231914 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 01:09:35.231918 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231926 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231942 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:09:35.231949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 01:09:35.231953 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 01:09:35.231957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231976 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.231980 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:09:35.231986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-10 01:09:35.231991 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-10 01:09:35.231995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.232002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.232006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.232012 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:09:35.232016 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:09:35.232021 | orchestrator | 2026-03-10 01:09:35.232024 | orchestrator | TASK [designate : Check designate containers] ********************************** 2026-03-10 01:09:35.232028 | orchestrator | Tuesday 10 March 2026 01:07:44 +0000 (0:00:01.563) 0:01:57.700 ********* 2026-03-10 01:09:35.232034 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': 
'9001'}}}}) 2026-03-10 01:09:35.232039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:09:35.232047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-10 01:09:35.232051 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.232057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.232063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-10 01:09:35.232067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.232071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.232079 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.232083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.232088 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.232092 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.232099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.232103 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.232110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.232114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.232118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.232124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:09:35.232128 | orchestrator | 2026-03-10 01:09:35.232132 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-10 01:09:35.232136 | orchestrator | Tuesday 10 March 2026 01:07:49 +0000 (0:00:04.947) 0:02:02.647 ********* 2026-03-10 01:09:35.232140 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:09:35.232144 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:09:35.232148 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:09:35.232151 | orchestrator | 2026-03-10 01:09:35.232155 | 
orchestrator | TASK [designate : Creating Designate databases] ******************************** 2026-03-10 01:09:35.232159 | orchestrator | Tuesday 10 March 2026 01:07:50 +0000 (0:00:00.883) 0:02:03.531 ********* 2026-03-10 01:09:35.232163 | orchestrator | changed: [testbed-node-0] => (item=designate) 2026-03-10 01:09:35.232167 | orchestrator | 2026-03-10 01:09:35.232171 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] *** 2026-03-10 01:09:35.232174 | orchestrator | Tuesday 10 March 2026 01:07:53 +0000 (0:00:02.672) 0:02:06.203 ********* 2026-03-10 01:09:35.232178 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-10 01:09:35.232182 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}] 2026-03-10 01:09:35.232186 | orchestrator | 2026-03-10 01:09:35.232193 | orchestrator | TASK [designate : Running Designate bootstrap container] *********************** 2026-03-10 01:09:35.232197 | orchestrator | Tuesday 10 March 2026 01:07:55 +0000 (0:00:02.568) 0:02:08.772 ********* 2026-03-10 01:09:35.232201 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:09:35.232205 | orchestrator | 2026-03-10 01:09:35.232212 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-10 01:09:35.232216 | orchestrator | Tuesday 10 March 2026 01:08:12 +0000 (0:00:17.020) 0:02:25.792 ********* 2026-03-10 01:09:35.232220 | orchestrator | 2026-03-10 01:09:35.232224 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-10 01:09:35.232228 | orchestrator | Tuesday 10 March 2026 01:08:12 +0000 (0:00:00.082) 0:02:25.875 ********* 2026-03-10 01:09:35.232232 | orchestrator | 2026-03-10 01:09:35.232235 | orchestrator | TASK [designate : Flush handlers] ********************************************** 2026-03-10 01:09:35.232239 | orchestrator | Tuesday 10 March 2026 01:08:12 +0000 (0:00:00.063) 0:02:25.939 ********* 
2026-03-10 01:09:35.232243 | orchestrator | 2026-03-10 01:09:35.232246 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ******** 2026-03-10 01:09:35.232250 | orchestrator | Tuesday 10 March 2026 01:08:13 +0000 (0:00:00.149) 0:02:26.088 ********* 2026-03-10 01:09:35.232254 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:09:35.232258 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:09:35.232262 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:09:35.232266 | orchestrator | 2026-03-10 01:09:35.232269 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ****************** 2026-03-10 01:09:35.232273 | orchestrator | Tuesday 10 March 2026 01:08:23 +0000 (0:00:10.286) 0:02:36.374 ********* 2026-03-10 01:09:35.232277 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:09:35.232281 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:09:35.232284 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:09:35.232288 | orchestrator | 2026-03-10 01:09:35.232307 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] ************** 2026-03-10 01:09:35.232311 | orchestrator | Tuesday 10 March 2026 01:08:37 +0000 (0:00:14.151) 0:02:50.526 ********* 2026-03-10 01:09:35.232315 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:09:35.232318 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:09:35.232322 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:09:35.232326 | orchestrator | 2026-03-10 01:09:35.232330 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] ************* 2026-03-10 01:09:35.232334 | orchestrator | Tuesday 10 March 2026 01:08:54 +0000 (0:00:17.008) 0:03:07.535 ********* 2026-03-10 01:09:35.232337 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:09:35.232342 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:09:35.232346 | orchestrator | changed: [testbed-node-1] 2026-03-10 
01:09:35.232349 | orchestrator | 2026-03-10 01:09:35.232353 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] ***************** 2026-03-10 01:09:35.232357 | orchestrator | Tuesday 10 March 2026 01:09:07 +0000 (0:00:12.638) 0:03:20.173 ********* 2026-03-10 01:09:35.232361 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:09:35.232364 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:09:35.232368 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:09:35.232372 | orchestrator | 2026-03-10 01:09:35.232376 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] *************** 2026-03-10 01:09:35.232380 | orchestrator | Tuesday 10 March 2026 01:09:16 +0000 (0:00:09.712) 0:03:29.885 ********* 2026-03-10 01:09:35.232383 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:09:35.232387 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:09:35.232391 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:09:35.232395 | orchestrator | 2026-03-10 01:09:35.232398 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2026-03-10 01:09:35.232404 | orchestrator | Tuesday 10 March 2026 01:09:24 +0000 (0:00:07.973) 0:03:37.859 ********* 2026-03-10 01:09:35.232410 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:09:35.232416 | orchestrator | 2026-03-10 01:09:35.232423 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:09:35.232429 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-10 01:09:35.232437 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-10 01:09:35.232455 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-10 01:09:35.232461 | orchestrator | 2026-03-10 01:09:35.232468 | orchestrator | 
2026-03-10 01:09:35.232479 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:09:35.232486 | orchestrator | Tuesday 10 March 2026 01:09:33 +0000 (0:00:08.218) 0:03:46.078 ********* 2026-03-10 01:09:35.232493 | orchestrator | =============================================================================== 2026-03-10 01:09:35.232499 | orchestrator | designate : Copying over designate.conf -------------------------------- 32.68s 2026-03-10 01:09:35.232505 | orchestrator | designate : Running Designate bootstrap container ---------------------- 17.02s 2026-03-10 01:09:35.232512 | orchestrator | designate : Restart designate-central container ------------------------ 17.01s 2026-03-10 01:09:35.232518 | orchestrator | designate : Restart designate-api container ---------------------------- 14.15s 2026-03-10 01:09:35.232523 | orchestrator | designate : Restart designate-producer container ----------------------- 12.64s 2026-03-10 01:09:35.232529 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 10.29s 2026-03-10 01:09:35.232536 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.71s 2026-03-10 01:09:35.232543 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 8.56s 2026-03-10 01:09:35.232549 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 8.22s 2026-03-10 01:09:35.232556 | orchestrator | designate : Restart designate-worker container -------------------------- 7.97s 2026-03-10 01:09:35.232568 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.95s 2026-03-10 01:09:35.232575 | orchestrator | designate : Copying over config.json files for services ----------------- 7.89s 2026-03-10 01:09:35.232581 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.93s 2026-03-10 
01:09:35.232588 | orchestrator | designate : Ensuring config directories exist --------------------------- 5.11s 2026-03-10 01:09:35.232595 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 5.09s 2026-03-10 01:09:35.232601 | orchestrator | designate : Check designate containers ---------------------------------- 4.95s 2026-03-10 01:09:35.232608 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 4.22s 2026-03-10 01:09:35.232615 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.06s 2026-03-10 01:09:35.232622 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.99s 2026-03-10 01:09:35.232629 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.99s 2026-03-10 01:09:35.232636 | orchestrator | 2026-03-10 01:09:35 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:09:35.232771 | orchestrator | 2026-03-10 01:09:35 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:09:35.233932 | orchestrator | 2026-03-10 01:09:35 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:38.297799 | orchestrator | 2026-03-10 01:09:38 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:09:38.298371 | orchestrator | 2026-03-10 01:09:38 | INFO  | Task 532e4efc-644c-4e5d-9a37-384547680560 is in state STARTED 2026-03-10 01:09:38.299624 | orchestrator | 2026-03-10 01:09:38 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:09:38.300406 | orchestrator | 2026-03-10 01:09:38 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:09:38.300444 | orchestrator | 2026-03-10 01:09:38 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:09:41.350702 | orchestrator | 2026-03-10 01:09:41 | INFO  | Task 
9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:09:59.659413 | orchestrator | 2026-03-10 01:09:59 | INFO  | Task 532e4efc-644c-4e5d-9a37-384547680560 is in state STARTED 2026-03-10 01:09:59.660930 | orchestrator | 2026-03-10 01:09:59 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:09:59.661742 | orchestrator | 2026-03-10 01:09:59 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:09:59.662598 | orchestrator | 2026-03-10 01:09:59 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:02.704277 | orchestrator | 2026-03-10 01:10:02 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state STARTED 2026-03-10 01:10:02.705375 | orchestrator | 2026-03-10 01:10:02 | INFO  | Task 532e4efc-644c-4e5d-9a37-384547680560 is in state STARTED 2026-03-10 01:10:02.707046 | orchestrator | 2026-03-10 01:10:02 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:10:02.708438 | orchestrator | 2026-03-10 01:10:02 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:10:02.708476 | orchestrator | 2026-03-10 01:10:02 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:05.755132 | orchestrator | 2026-03-10 01:10:05 | INFO  | Task 9962047c-59b9-4bd8-9088-77246d750888 is in state SUCCESS 2026-03-10 01:10:05.756478 | orchestrator | 2026-03-10 01:10:05.756525 | orchestrator | 2026-03-10 01:10:05.756532 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:10:05.756537 | orchestrator | 2026-03-10 01:10:05.756542 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:10:05.756547 | orchestrator | Tuesday 10 March 2026 01:05:38 +0000 (0:00:00.300) 0:00:00.300 ********* 2026-03-10 01:10:05.756552 | orchestrator | ok: [testbed-manager] 2026-03-10 01:10:05.756558 | orchestrator | ok: [testbed-node-0] 
2026-03-10 01:10:05.756562 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:10:05.756567 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:10:05.756571 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:10:05.756575 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:10:05.756580 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:10:05.756584 | orchestrator | 2026-03-10 01:10:05.756588 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:10:05.756593 | orchestrator | Tuesday 10 March 2026 01:05:40 +0000 (0:00:01.291) 0:00:01.591 ********* 2026-03-10 01:10:05.756598 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2026-03-10 01:10:05.756603 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2026-03-10 01:10:05.756607 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2026-03-10 01:10:05.756611 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2026-03-10 01:10:05.756615 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2026-03-10 01:10:05.756620 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2026-03-10 01:10:05.756624 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2026-03-10 01:10:05.756628 | orchestrator | 2026-03-10 01:10:05.756633 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2026-03-10 01:10:05.756637 | orchestrator | 2026-03-10 01:10:05.756642 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-10 01:10:05.756646 | orchestrator | Tuesday 10 March 2026 01:05:41 +0000 (0:00:00.897) 0:00:02.489 ********* 2026-03-10 01:10:05.756651 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:10:05.756674 | 
orchestrator | 2026-03-10 01:10:05.756679 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2026-03-10 01:10:05.756683 | orchestrator | Tuesday 10 March 2026 01:05:43 +0000 (0:00:02.484) 0:00:04.974 ********* 2026-03-10 01:10:05.756701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.756709 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-10 01:10:05.756716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.756722 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.756737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.756742 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.756747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 
'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.756760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.756766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.756771 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}}) 2026-03-10 01:10:05.756776 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.756780 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.756790 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.756795 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.756799 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.756812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.756817 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.756822 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.756854 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.756875 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-10 01:10:05.756890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.756903 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.756916 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.756921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.756925 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.756930 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.756934 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.756950 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.756956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.756971 | orchestrator | 2026-03-10 01:10:05.756976 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-10 01:10:05.756980 | orchestrator | Tuesday 10 March 2026 01:05:47 +0000 (0:00:04.324) 0:00:09.298 ********* 2026-03-10 01:10:05.757071 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:10:05.757081 | orchestrator | 2026-03-10 01:10:05.757089 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-10 01:10:05.757095 | orchestrator | Tuesday 10 March 2026 01:05:49 +0000 (0:00:01.790) 0:00:11.089 ********* 2026-03-10 01:10:05.757107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 
'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.757115 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-10 01:10:05.757123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.757128 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.757137 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.757141 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.757150 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.757157 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.757169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.757175 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.757182 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', 
'/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.757189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.757202 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.757222 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.757229 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.757246 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.757253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.757261 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 
01:10:05.757268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.757274 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.757313 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.757327 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-10 01:10:05.757335 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.757344 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.757351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.757359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.757366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.757614 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 
01:10:05.757668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.757679 | orchestrator | 2026-03-10 01:10:05.757686 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-10 01:10:05.757695 | orchestrator | Tuesday 10 March 2026 01:05:55 +0000 (0:00:06.226) 0:00:17.315 ********* 2026-03-10 01:10:05.757703 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-10 01:10:05.757716 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:10:05.757724 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.757757 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-10 01:10:05.757781 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 
'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.757789 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:10:05.757797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.757805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.757817 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.757825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.757872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:10:05.757889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.757939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.757948 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.757956 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.757963 | orchestrator | skipping: [testbed-manager] 2026-03-10 01:10:05.757971 | orchestrator | skipping: 
[testbed-node-1] 2026-03-10 01:10:05.757978 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:05.757990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:10:05.757998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.758005 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.758174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.758195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.758203 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:05.758218 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:10:05.758227 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.758234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.758247 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:10:05.758254 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:10:05.758262 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.758271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 
'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.758306 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:10:05.758315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:10:05.758324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.758340 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.758348 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:10:05.758356 | orchestrator | 2026-03-10 01:10:05.758364 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2026-03-10 01:10:05.758371 | orchestrator | Tuesday 10 March 2026 01:05:57 +0000 (0:00:01.883) 0:00:19.199 ********* 2026-03-10 01:10:05.758379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:10:05.758387 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.758400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.758408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.758420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.758428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:10:05.758441 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.758450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.758458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.758467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.758475 | orchestrator | skipping: 
[testbed-node-0] 2026-03-10 01:10:05.758486 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:10:05.758495 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-10 01:10:05.758511 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:10:05.758518 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.758531 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-10 01:10:05.758540 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.758547 | orchestrator | skipping: [testbed-manager] 2026-03-10 01:10:05.758556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 
'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:10:05.758568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.758580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:10:05.758589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.758598 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.758609 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.758618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.758938 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-10 01:10:05.758956 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:10:05.758961 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:05.758966 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:10:05.758982 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.758987 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.758991 | orchestrator | skipping: [testbed-node-5] 
2026-03-10 01:10:05.758996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-10 01:10:05.759000 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.759005 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-10 01:10:05.759009 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:10:05.759013 | orchestrator | 2026-03-10 01:10:05.759018 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2026-03-10 01:10:05.759023 | orchestrator | Tuesday 10 March 2026 01:06:00 +0000 (0:00:02.544) 
0:00:21.743 ********* 2026-03-10 01:10:05.759027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.759037 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-10 01:10:05.759053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.759060 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.759168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.759181 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.759189 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.759197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.759210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.759225 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.759237 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.759245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.759253 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.759261 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2026-03-10 01:10:05.759270 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.759278 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.759355 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.759366 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.759374 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.759379 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.759384 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.759388 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.759394 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-10 01:10:05.759406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 
2026-03-10 01:10:05.759411 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.759418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.759423 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.759428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.759432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.759437 | orchestrator | 2026-03-10 01:10:05.759441 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-10 01:10:05.759446 | orchestrator | Tuesday 10 March 2026 01:06:06 +0000 (0:00:06.578) 0:00:28.322 ********* 2026-03-10 01:10:05.759450 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-10 01:10:05.759456 | orchestrator | 2026-03-10 01:10:05.759463 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-10 01:10:05.759470 | orchestrator | Tuesday 10 March 2026 01:06:08 +0000 (0:00:01.580) 0:00:29.903 ********* 2026-03-10 01:10:05.759476 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1101760, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3410747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 
'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759498 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1101760, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3410747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759510 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1101760, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3410747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759517 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1101783, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3469448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759525 | orchestrator | skipping: [testbed-node-5] => (item={'path': 
'/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1101760, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3410747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759532 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1101783, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3469448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759539 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1101783, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3469448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759554 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1101760, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3410747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759564 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1101760, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3410747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759575 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1101749, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3401284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759582 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1101783, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 
'ctime': 1773101976.3469448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759590 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1101760, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3410747, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-10 01:10:05.759598 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1101749, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3401284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759605 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1101783, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3469448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759618 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1101749, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3401284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759630 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1101783, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3469448, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759642 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1101773, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3448236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759649 | orchestrator | skipping: [testbed-node-2] => 
(item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1101749, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3401284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759702 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1101749, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3401284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759712 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1101773, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3448236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759719 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1101743, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3378236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759735 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1101773, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3448236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759747 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1101749, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3401284, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.759753 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1101743, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 
orchestrator | Loop results for /operations/prometheus/*.rules, condensed (every item: regular file, mode 0644, owner/group root:root; sizes in bytes):
orchestrator | changed:  [testbed-manager] => prometheus.rules (12944), ceph.rules (56929), openstack.rules (12293), cadvisor.rules (3900), haproxy.rules (7933)
orchestrator | skipping: [testbed-node-0] => haproxy.rules, node.rules, hardware.rules, elasticsearch.rules, prometheus.rec.rules, alertmanager.rec.rules, prometheus-extra.rules, redfish.rules
orchestrator | skipping: [testbed-node-1] => haproxy.rules, node.rules, hardware.rules, elasticsearch.rules, prometheus.rec.rules, alertmanager.rec.rules, redfish.rules, prometheus-extra.rules, ceph.rec.rules, alertmanager.rules, node.rec.rules
orchestrator | skipping: [testbed-node-2] => openstack.rules, cadvisor.rules, haproxy.rules, node.rules, hardware.rules, elasticsearch.rules, prometheus.rec.rules, alertmanager.rec.rules, redfish.rules, prometheus-extra.rules, ceph.rec.rules, alertmanager.rules, node.rec.rules
orchestrator | skipping: [testbed-node-3] => openstack.rules, cadvisor.rules, haproxy.rules, node.rules, hardware.rules, elasticsearch.rules, prometheus.rec.rules, alertmanager.rec.rules
orchestrator | skipping: [testbed-node-4] => openstack.rules, cadvisor.rules, haproxy.rules, node.rules, hardware.rules, elasticsearch.rules, prometheus.rec.rules, alertmanager.rec.rules, redfish.rules, prometheus-extra.rules
orchestrator | skipping: [testbed-node-5] => cadvisor.rules, haproxy.rules, node.rules, hardware.rules, elasticsearch.rules, prometheus.rec.rules, alertmanager.rec.rules, redfish.rules, ceph.rec.rules, alertmanager.rules, prometheus-extra.rules, node.rec.rules, mysql.rules
orchestrator | File sizes (bytes): node.rules 14018, hardware.rules 5593, elasticsearch.rules 5987, prometheus-extra.rules 7408, alertmanager.rules 5065, mysql.rules 3792, node.rec.rules 2309, redfish.rules 334, prometheus.rec.rules 3, alertmanager.rec.rules 3, ceph.rec.rules 3
2026-03-10 01:10:05.760584 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1101767, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3429134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760592 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1101767, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3429134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760600 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1101791, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3498237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760608 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:10:05.760614 | orchestrator | skipping: [testbed-node-0] => (item={'path': 
'/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1101746, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3378236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760618 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1101746, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3378236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760623 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1101792, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3503442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760627 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1101791, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3498237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760632 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:10:05.760636 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1101791, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3498237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760641 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:05.760648 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1101740, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.337229, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760658 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 14018, 'inode': 1101771, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3438237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-10 01:10:05.760663 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1101776, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3458238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760667 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1101746, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3378236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760672 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1101768, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.343307, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760676 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1101740, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.337229, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760683 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1101767, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3429134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760695 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1101766, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3422203, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}) 2026-03-10 01:10:05.760714 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1101740, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.337229, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760721 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1101791, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3498237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760728 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:10:05.760735 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1101768, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.343307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760743 | orchestrator | skipping: [testbed-node-3] => 
(item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1101768, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.343307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760748 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1101767, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3429134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760752 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1101756, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3404243, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-10 01:10:05.760759 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1101767, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3429134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760771 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1101791, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3498237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760776 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:05.760780 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1101791, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3498237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-10 01:10:05.760785 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:10:05.760789 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1101780, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3463728, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-10 01:10:05.760794 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1101739, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3368235, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-10 01:10:05.760798 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1101792, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3503442, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-10 01:10:05.760802 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1101776, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 
1773101976.3458238, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-10 01:10:05.760815 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1101746, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3378236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-10 01:10:05.760823 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1101740, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.337229, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-10 01:10:05.760827 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1101768, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.343307, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-10 01:10:05.760832 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1101767, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3429134, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-10 01:10:05.760836 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1101791, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3498237, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-10 01:10:05.760841 | orchestrator | 2026-03-10 01:10:05.760845 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-10 01:10:05.760850 | orchestrator | Tuesday 10 March 2026 01:06:51 +0000 (0:00:42.816) 0:01:12.719 ********* 2026-03-10 01:10:05.760854 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-10 01:10:05.760859 | orchestrator | 2026-03-10 01:10:05.760863 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-10 01:10:05.760867 | orchestrator | Tuesday 10 March 2026 01:06:52 +0000 (0:00:01.420) 0:01:14.139 ********* 2026-03-10 01:10:05.760872 | orchestrator | [WARNING]: Skipped 2026-03-10 
01:10:05.760877 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-10 01:10:05.760881 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-10 01:10:05.760892 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-10 01:10:05.760896 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-10 01:10:05.760900 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-10 01:10:05.760905 | orchestrator | [WARNING]: Skipped 2026-03-10 01:10:05.760909 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-10 01:10:05.760913 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-10 01:10:05.760918 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-10 01:10:05.760922 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-10 01:10:05.760927 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 01:10:05.760931 | orchestrator | [WARNING]: Skipped 2026-03-10 01:10:05.760935 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-10 01:10:05.760940 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-10 01:10:05.760944 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-10 01:10:05.760950 | orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-10 01:10:05.760955 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-10 01:10:05.760959 | orchestrator | [WARNING]: Skipped 2026-03-10 01:10:05.760964 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-10 01:10:05.760968 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-10 01:10:05.760972 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-10 01:10:05.760977 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-10 01:10:05.760981 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-10 01:10:05.760985 | orchestrator | [WARNING]: Skipped 2026-03-10 01:10:05.760990 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-10 01:10:05.760994 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-10 01:10:05.760998 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-10 01:10:05.761006 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-10 01:10:05.761010 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-10 01:10:05.761015 | orchestrator | [WARNING]: Skipped 2026-03-10 01:10:05.761019 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-10 01:10:05.761023 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-10 01:10:05.761028 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-10 01:10:05.761032 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-10 01:10:05.761036 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-10 01:10:05.761040 | orchestrator | [WARNING]: Skipped 2026-03-10 01:10:05.761045 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-10 01:10:05.761051 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-10 01:10:05.761058 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-10 01:10:05.761065 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-10 01:10:05.761072 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-10 01:10:05.761079 | orchestrator | 2026-03-10 
01:10:05.761086 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-10 01:10:05.761093 | orchestrator | Tuesday 10 March 2026 01:06:59 +0000 (0:00:06.813) 0:01:20.953 ********* 2026-03-10 01:10:05.761098 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-10 01:10:05.761103 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:05.761108 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-10 01:10:05.761119 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:10:05.761124 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-10 01:10:05.761129 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:05.761134 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-10 01:10:05.761139 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:10:05.761144 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-10 01:10:05.761149 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:10:05.761154 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-10 01:10:05.761159 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:10:05.761164 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-10 01:10:05.761169 | orchestrator | 2026-03-10 01:10:05.761174 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-10 01:10:05.761179 | orchestrator | Tuesday 10 March 2026 01:07:34 +0000 (0:00:35.315) 0:01:56.268 ********* 2026-03-10 01:10:05.761184 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-10 01:10:05.761190 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-10 01:10:05.761195 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:05.761200 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:10:05.761205 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-10 01:10:05.761210 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:05.761215 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-10 01:10:05.761220 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:10:05.761225 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-10 01:10:05.761230 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:10:05.761235 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-10 01:10:05.761240 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:10:05.761245 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-10 01:10:05.761250 | orchestrator | 2026-03-10 01:10:05.761255 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-10 01:10:05.761259 | orchestrator | Tuesday 10 March 2026 01:07:40 +0000 (0:00:05.507) 0:02:01.776 ********* 2026-03-10 01:10:05.761265 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-10 01:10:05.761273 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-10 01:10:05.761299 | 
orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-10 01:10:05.761308 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:10:05.761316 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-10 01:10:05.761323 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:05.761330 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:05.761359 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-10 01:10:05.761369 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:10:05.761374 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-10 01:10:05.761384 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:10:05.761389 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-10 01:10:05.761394 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:10:05.761399 | orchestrator | 2026-03-10 01:10:05.761404 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-10 01:10:05.761409 | orchestrator | Tuesday 10 March 2026 01:07:44 +0000 (0:00:04.254) 0:02:06.030 ********* 2026-03-10 01:10:05.761415 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-10 01:10:05.761420 | orchestrator | 2026-03-10 01:10:05.761425 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-10 01:10:05.761430 | orchestrator | Tuesday 10 March 2026 01:07:45 +0000 (0:00:01.022) 0:02:07.052 ********* 2026-03-10 01:10:05.761435 | orchestrator | skipping: [testbed-manager] 
2026-03-10 01:10:05.761440 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:05.761445 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:10:05.761450 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:05.761455 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:10:05.761459 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:10:05.761463 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:10:05.761467 | orchestrator | 2026-03-10 01:10:05.761472 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-10 01:10:05.761476 | orchestrator | Tuesday 10 March 2026 01:07:47 +0000 (0:00:01.596) 0:02:08.649 ********* 2026-03-10 01:10:05.761480 | orchestrator | skipping: [testbed-manager] 2026-03-10 01:10:05.761485 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:10:05.761489 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:10:05.761493 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:10:05.761497 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:10:05.761502 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:10:05.761506 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:10:05.761510 | orchestrator | 2026-03-10 01:10:05.761514 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-10 01:10:05.761519 | orchestrator | Tuesday 10 March 2026 01:07:50 +0000 (0:00:03.439) 0:02:12.088 ********* 2026-03-10 01:10:05.761523 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-10 01:10:05.761527 | orchestrator | skipping: [testbed-manager] 2026-03-10 01:10:05.761532 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-10 01:10:05.761536 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:10:05.761540 | orchestrator | skipping: [testbed-node-0] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-10 01:10:05.761544 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:05.761549 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-10 01:10:05.761553 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:05.761557 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-10 01:10:05.761562 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:10:05.761566 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-10 01:10:05.761570 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:10:05.761574 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-10 01:10:05.761579 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:10:05.761583 | orchestrator | 2026-03-10 01:10:05.761587 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-10 01:10:05.761592 | orchestrator | Tuesday 10 March 2026 01:07:53 +0000 (0:00:02.769) 0:02:14.858 ********* 2026-03-10 01:10:05.761596 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-10 01:10:05.761604 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:05.761609 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-10 01:10:05.761613 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:05.761617 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-10 01:10:05.761622 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-10 
01:10:05.761626 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:10:05.761631 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:10:05.761638 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-10 01:10:05.761643 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-10 01:10:05.761647 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:10:05.761651 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-10 01:10:05.761656 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:10:05.761660 | orchestrator | 2026-03-10 01:10:05.761664 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-10 01:10:05.761669 | orchestrator | Tuesday 10 March 2026 01:07:56 +0000 (0:00:02.864) 0:02:17.722 ********* 2026-03-10 01:10:05.761673 | orchestrator | [WARNING]: Skipped 2026-03-10 01:10:05.761677 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-10 01:10:05.761684 | orchestrator | due to this access issue: 2026-03-10 01:10:05.761689 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-10 01:10:05.761693 | orchestrator | not a directory 2026-03-10 01:10:05.761698 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-10 01:10:05.761702 | orchestrator | 2026-03-10 01:10:05.761706 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-10 01:10:05.761711 | orchestrator | Tuesday 10 March 2026 01:07:58 +0000 (0:00:02.224) 0:02:19.948 ********* 2026-03-10 01:10:05.761715 | orchestrator | skipping: [testbed-manager] 2026-03-10 01:10:05.761719 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:05.761724 | 
orchestrator | skipping: [testbed-node-1] 2026-03-10 01:10:05.761728 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:05.761732 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:10:05.761736 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:10:05.761741 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:10:05.761745 | orchestrator | 2026-03-10 01:10:05.761749 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-10 01:10:05.761753 | orchestrator | Tuesday 10 March 2026 01:07:59 +0000 (0:00:01.392) 0:02:21.340 ********* 2026-03-10 01:10:05.761758 | orchestrator | skipping: [testbed-manager] 2026-03-10 01:10:05.761762 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:05.761766 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:10:05.761771 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:05.761775 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:10:05.761779 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:10:05.761783 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:10:05.761788 | orchestrator | 2026-03-10 01:10:05.761792 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-10 01:10:05.761796 | orchestrator | Tuesday 10 March 2026 01:08:01 +0000 (0:00:01.907) 0:02:23.248 ********* 2026-03-10 01:10:05.761801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.761813 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-10 01:10:05.761820 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.761832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.761844 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 
'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.761856 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.761863 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.761870 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.761883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.761891 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.761898 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-10 01:10:05.761909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.761916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.761929 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.761937 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.761944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.761957 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.761964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.761971 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.761982 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.761991 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.762002 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.762007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.762050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.762056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-10 01:10:05.762061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.762069 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-10 01:10:05.762075 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.762083 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-10 01:10:05.762088 | orchestrator | 2026-03-10 01:10:05.762093 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-10 01:10:05.762097 | orchestrator | Tuesday 10 March 2026 01:08:08 +0000 (0:00:06.739) 0:02:29.988 ********* 2026-03-10 01:10:05.762107 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-10 01:10:05.762111 | orchestrator | skipping: [testbed-manager] 2026-03-10 01:10:05.762116 | orchestrator | 2026-03-10 01:10:05.762120 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-10 01:10:05.762124 | orchestrator | Tuesday 10 March 2026 01:08:11 +0000 (0:00:02.685) 0:02:32.673 ********* 2026-03-10 01:10:05.762129 | orchestrator | 2026-03-10 01:10:05.762133 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-10 01:10:05.762137 | orchestrator | Tuesday 10 March 2026 01:08:11 +0000 (0:00:00.160) 0:02:32.834 ********* 2026-03-10 01:10:05.762142 | orchestrator | 2026-03-10 01:10:05.762146 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-10 01:10:05.762150 | orchestrator | Tuesday 10 March 2026 01:08:11 +0000 (0:00:00.227) 0:02:33.062 ********* 2026-03-10 01:10:05.762154 | orchestrator | 2026-03-10 01:10:05.762159 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-10 01:10:05.762163 | orchestrator | Tuesday 10 March 2026 01:08:11 +0000 (0:00:00.152) 0:02:33.214 ********* 2026-03-10 01:10:05.762167 | orchestrator | 2026-03-10 01:10:05.762172 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-10 01:10:05.762176 | 
orchestrator | Tuesday 10 March 2026 01:08:12 +0000 (0:00:00.283) 0:02:33.498 ********* 2026-03-10 01:10:05.762180 | orchestrator | 2026-03-10 01:10:05.762185 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-10 01:10:05.762189 | orchestrator | Tuesday 10 March 2026 01:08:12 +0000 (0:00:00.099) 0:02:33.597 ********* 2026-03-10 01:10:05.762193 | orchestrator | 2026-03-10 01:10:05.762198 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-10 01:10:05.762202 | orchestrator | Tuesday 10 March 2026 01:08:12 +0000 (0:00:00.069) 0:02:33.667 ********* 2026-03-10 01:10:05.762206 | orchestrator | 2026-03-10 01:10:05.762211 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-10 01:10:05.762215 | orchestrator | Tuesday 10 March 2026 01:08:12 +0000 (0:00:00.096) 0:02:33.763 ********* 2026-03-10 01:10:05.762219 | orchestrator | changed: [testbed-manager] 2026-03-10 01:10:05.762224 | orchestrator | 2026-03-10 01:10:05.762228 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-10 01:10:05.762232 | orchestrator | Tuesday 10 March 2026 01:08:30 +0000 (0:00:17.860) 0:02:51.623 ********* 2026-03-10 01:10:05.762236 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:10:05.762241 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:10:05.762245 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:10:05.762249 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:10:05.762253 | orchestrator | changed: [testbed-manager] 2026-03-10 01:10:05.762258 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:10:05.762262 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:10:05.762266 | orchestrator | 2026-03-10 01:10:05.762271 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-10 01:10:05.762275 | 
orchestrator | Tuesday 10 March 2026 01:08:47 +0000 (0:00:17.654) 0:03:09.278 ********* 2026-03-10 01:10:05.762295 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:10:05.762301 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:10:05.762305 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:10:05.762309 | orchestrator | 2026-03-10 01:10:05.762314 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-10 01:10:05.762318 | orchestrator | Tuesday 10 March 2026 01:08:59 +0000 (0:00:11.214) 0:03:20.493 ********* 2026-03-10 01:10:05.762322 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:10:05.762326 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:10:05.762331 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:10:05.762335 | orchestrator | 2026-03-10 01:10:05.762339 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-10 01:10:05.762344 | orchestrator | Tuesday 10 March 2026 01:09:10 +0000 (0:00:11.003) 0:03:31.497 ********* 2026-03-10 01:10:05.762352 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:10:05.762356 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:10:05.762360 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:10:05.762364 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:10:05.762372 | orchestrator | changed: [testbed-manager] 2026-03-10 01:10:05.762376 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:10:05.762380 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:10:05.762385 | orchestrator | 2026-03-10 01:10:05.762389 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-10 01:10:05.762393 | orchestrator | Tuesday 10 March 2026 01:09:27 +0000 (0:00:16.902) 0:03:48.399 ********* 2026-03-10 01:10:05.762397 | orchestrator | changed: [testbed-manager] 2026-03-10 01:10:05.762402 | orchestrator | 2026-03-10 
01:10:05.762406 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2026-03-10 01:10:05.762410 | orchestrator | Tuesday 10 March 2026 01:09:35 +0000 (0:00:08.205) 0:03:56.605 *********
2026-03-10 01:10:05.762415 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:10:05.762419 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:10:05.762423 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:10:05.762427 | orchestrator |
2026-03-10 01:10:05.762432 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2026-03-10 01:10:05.762439 | orchestrator | Tuesday 10 March 2026 01:09:46 +0000 (0:00:11.215) 0:04:07.820 *********
2026-03-10 01:10:05.762443 | orchestrator | changed: [testbed-manager]
2026-03-10 01:10:05.762447 | orchestrator |
2026-03-10 01:10:05.762452 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2026-03-10 01:10:05.762456 | orchestrator | Tuesday 10 March 2026 01:09:52 +0000 (0:00:05.571) 0:04:13.392 *********
2026-03-10 01:10:05.762460 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:10:05.762465 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:10:05.762469 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:10:05.762473 | orchestrator |
2026-03-10 01:10:05.762477 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:10:05.762482 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-10 01:10:05.762487 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-10 01:10:05.762491 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-10 01:10:05.762496 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2026-03-10 01:10:05.762500 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-10 01:10:05.762504 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-10 01:10:05.762509 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2026-03-10 01:10:05.762513 | orchestrator |
2026-03-10 01:10:05.762517 | orchestrator |
2026-03-10 01:10:05.762521 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:10:05.762526 | orchestrator | Tuesday 10 March 2026 01:10:02 +0000 (0:00:10.496) 0:04:23.888 *********
2026-03-10 01:10:05.762530 | orchestrator | ===============================================================================
2026-03-10 01:10:05.762534 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 42.82s
2026-03-10 01:10:05.762539 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 35.32s
2026-03-10 01:10:05.762547 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 17.87s
2026-03-10 01:10:05.762551 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 17.64s
2026-03-10 01:10:05.762556 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 16.90s
2026-03-10 01:10:05.762563 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 11.21s
2026-03-10 01:10:05.762570 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 11.21s
2026-03-10 01:10:05.762576 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 11.00s
2026-03-10 01:10:05.762583 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 10.50s
2026-03-10 01:10:05.762590 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.21s
2026-03-10 01:10:05.762597 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 6.81s
2026-03-10 01:10:05.762604 | orchestrator | prometheus : Check prometheus containers -------------------------------- 6.74s
2026-03-10 01:10:05.762611 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.58s
2026-03-10 01:10:05.762618 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.23s
2026-03-10 01:10:05.762624 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.57s
2026-03-10 01:10:05.762630 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 5.51s
2026-03-10 01:10:05.762636 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 4.32s
2026-03-10 01:10:05.762643 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 4.25s
2026-03-10 01:10:05.762649 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.44s
2026-03-10 01:10:05.762656 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 2.86s
2026-03-10 01:10:05.762667 | orchestrator | 2026-03-10 01:10:05 | INFO  | Task 532e4efc-644c-4e5d-9a37-384547680560 is in state STARTED
2026-03-10 01:10:05.762674 | orchestrator | 2026-03-10 01:10:05 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED
2026-03-10 01:10:05.763251 | orchestrator | 2026-03-10 01:10:05 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED
2026-03-10 01:10:05.765355 | orchestrator | 2026-03-10 01:10:05 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED
2026-03-10 01:10:05.765412 | orchestrator | 2026-03-10 01:10:05 | INFO  | Wait 1 second(s) until the
next check 2026-03-10 01:10:54.577033 | orchestrator | 2026-03-10 01:10:54 | INFO  | Task
532e4efc-644c-4e5d-9a37-384547680560 is in state SUCCESS 2026-03-10 01:10:54.578326 | orchestrator | 2026-03-10 01:10:54.578374 | orchestrator | 2026-03-10 01:10:54.578384 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:10:54.578393 | orchestrator | 2026-03-10 01:10:54.578400 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:10:54.578409 | orchestrator | Tuesday 10 March 2026 01:09:39 +0000 (0:00:00.347) 0:00:00.347 ********* 2026-03-10 01:10:54.578416 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:10:54.578424 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:10:54.578431 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:10:54.578438 | orchestrator | 2026-03-10 01:10:54.578445 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:10:54.578452 | orchestrator | Tuesday 10 March 2026 01:09:39 +0000 (0:00:00.336) 0:00:00.683 ********* 2026-03-10 01:10:54.578461 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2026-03-10 01:10:54.578468 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2026-03-10 01:10:54.578475 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2026-03-10 01:10:54.578482 | orchestrator | 2026-03-10 01:10:54.578489 | orchestrator | PLAY [Apply role placement] **************************************************** 2026-03-10 01:10:54.578497 | orchestrator | 2026-03-10 01:10:54.578504 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-10 01:10:54.578511 | orchestrator | Tuesday 10 March 2026 01:09:40 +0000 (0:00:00.503) 0:00:01.186 ********* 2026-03-10 01:10:54.578519 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:10:54.578528 | orchestrator | 2026-03-10 
01:10:54.578535 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2026-03-10 01:10:54.578543 | orchestrator | Tuesday 10 March 2026 01:09:40 +0000 (0:00:00.666) 0:00:01.853 ********* 2026-03-10 01:10:54.578573 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2026-03-10 01:10:54.578581 | orchestrator | 2026-03-10 01:10:54.578588 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ******************** 2026-03-10 01:10:54.578596 | orchestrator | Tuesday 10 March 2026 01:09:44 +0000 (0:00:03.785) 0:00:05.638 ********* 2026-03-10 01:10:54.578603 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2026-03-10 01:10:54.578612 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2026-03-10 01:10:54.578618 | orchestrator | 2026-03-10 01:10:54.578625 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2026-03-10 01:10:54.578632 | orchestrator | Tuesday 10 March 2026 01:09:51 +0000 (0:00:06.731) 0:00:12.369 ********* 2026-03-10 01:10:54.578660 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-10 01:10:54.578667 | orchestrator | 2026-03-10 01:10:54.578674 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2026-03-10 01:10:54.578681 | orchestrator | Tuesday 10 March 2026 01:09:54 +0000 (0:00:03.369) 0:00:15.739 ********* 2026-03-10 01:10:54.578688 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2026-03-10 01:10:54.578695 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-10 01:10:54.578702 | orchestrator | 2026-03-10 01:10:54.578708 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2026-03-10 01:10:54.578715 | orchestrator | Tuesday 10 March 2026 
01:09:59 +0000 (0:00:04.235) 0:00:19.974 ********* 2026-03-10 01:10:54.578722 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-10 01:10:54.578729 | orchestrator | 2026-03-10 01:10:54.578735 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2026-03-10 01:10:54.578742 | orchestrator | Tuesday 10 March 2026 01:10:02 +0000 (0:00:03.690) 0:00:23.664 ********* 2026-03-10 01:10:54.578748 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin) 2026-03-10 01:10:54.578754 | orchestrator | 2026-03-10 01:10:54.578761 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-10 01:10:54.578767 | orchestrator | Tuesday 10 March 2026 01:10:06 +0000 (0:00:04.090) 0:00:27.755 ********* 2026-03-10 01:10:54.578774 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:54.578781 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:10:54.578789 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:54.578797 | orchestrator | 2026-03-10 01:10:54.578805 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2026-03-10 01:10:54.578812 | orchestrator | Tuesday 10 March 2026 01:10:07 +0000 (0:00:00.369) 0:00:28.125 ********* 2026-03-10 01:10:54.578850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:10:54.578878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:10:54.578886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:10:54.578902 | orchestrator | 2026-03-10 01:10:54.578910 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2026-03-10 01:10:54.578917 | orchestrator | Tuesday 10 March 2026 01:10:08 +0000 (0:00:00.995) 0:00:29.121 ********* 2026-03-10 01:10:54.578924 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:54.578931 | orchestrator | 2026-03-10 01:10:54.578938 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2026-03-10 01:10:54.578945 | orchestrator | Tuesday 10 March 2026 01:10:08 +0000 (0:00:00.145) 0:00:29.266 ********* 2026-03-10 01:10:54.578952 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:54.578959 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:10:54.578966 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:54.578972 | orchestrator | 2026-03-10 01:10:54.578980 | orchestrator | TASK [placement : include_tasks] *********************************************** 2026-03-10 01:10:54.578987 | orchestrator | Tuesday 10 March 2026 01:10:08 +0000 (0:00:00.560) 0:00:29.827 ********* 2026-03-10 01:10:54.578994 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:10:54.579002 | orchestrator | 2026-03-10 01:10:54.579008 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2026-03-10 01:10:54.579015 | orchestrator | Tuesday 10 March 2026 01:10:09 +0000 (0:00:00.503) 0:00:30.330 ********* 2026-03-10 01:10:54.579024 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:10:54.579046 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:10:54.579054 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:10:54.579067 | orchestrator | 2026-03-10 01:10:54.579074 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2026-03-10 01:10:54.579080 | orchestrator | Tuesday 10 March 2026 01:10:10 +0000 (0:00:01.447) 0:00:31.778 ********* 2026-03-10 01:10:54.579088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 01:10:54.579096 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:54.579103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 01:10:54.579110 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:10:54.579126 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 01:10:54.579134 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:54.579141 | orchestrator | 2026-03-10 01:10:54.579148 | orchestrator | TASK [service-cert-copy : placement | Copying over backend 
internal TLS key] *** 2026-03-10 01:10:54.579160 | orchestrator | Tuesday 10 March 2026 01:10:11 +0000 (0:00:00.661) 0:00:32.439 ********* 2026-03-10 01:10:54.579167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 01:10:54.579174 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:54.579181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 01:10:54.579188 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:10:54.579195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 01:10:54.579202 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:54.579209 | orchestrator | 2026-03-10 01:10:54.579216 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2026-03-10 01:10:54.579223 | orchestrator | Tuesday 10 March 2026 01:10:12 +0000 (0:00:00.726) 0:00:33.166 ********* 2026-03-10 01:10:54.579241 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:10:54.579256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:10:54.579285 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:10:54.579292 | orchestrator | 2026-03-10 01:10:54.579300 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2026-03-10 01:10:54.579307 | orchestrator | Tuesday 10 March 2026 01:10:13 +0000 (0:00:01.399) 0:00:34.565 ********* 2026-03-10 01:10:54.579368 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:10:54.579381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:10:54.579402 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:10:54.579410 | orchestrator | 2026-03-10 01:10:54.579417 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-10 01:10:54.579425 | orchestrator | Tuesday 10 March 2026 01:10:16 +0000 (0:00:03.165) 0:00:37.731 ********* 2026-03-10 01:10:54.579430 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-10 01:10:54.579436 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-10 01:10:54.579443 | orchestrator | changed: [testbed-node-2] => 
(item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-10 01:10:54.579449 | orchestrator | 2026-03-10 01:10:54.579456 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-10 01:10:54.579463 | orchestrator | Tuesday 10 March 2026 01:10:18 +0000 (0:00:01.573) 0:00:39.305 ********* 2026-03-10 01:10:54.579469 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:10:54.579477 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:10:54.579484 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:10:54.579491 | orchestrator | 2026-03-10 01:10:54.579498 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-10 01:10:54.579505 | orchestrator | Tuesday 10 March 2026 01:10:19 +0000 (0:00:01.415) 0:00:40.720 ********* 2026-03-10 01:10:54.579513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 01:10:54.579520 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:10:54.579528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 
'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 01:10:54.579541 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:10:54.579556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-10 01:10:54.579564 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:10:54.579571 | orchestrator | 2026-03-10 01:10:54.579578 | orchestrator | TASK [placement : Check placement 
containers] ********************************** 2026-03-10 01:10:54.579584 | orchestrator | Tuesday 10 March 2026 01:10:20 +0000 (0:00:00.560) 0:00:41.281 ********* 2026-03-10 01:10:54.579592 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:10:54.579598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:10:54.579605 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-10 01:10:54.579619 | orchestrator | 2026-03-10 01:10:54.579625 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-10 01:10:54.579631 | orchestrator | Tuesday 10 March 2026 01:10:21 +0000 (0:00:01.185) 0:00:42.467 ********* 2026-03-10 01:10:54.579637 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:10:54.579644 | orchestrator | 2026-03-10 01:10:54.579650 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-10 01:10:54.579657 | orchestrator | Tuesday 10 March 2026 01:10:24 +0000 (0:00:03.122) 0:00:45.589 ********* 2026-03-10 01:10:54.579664 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:10:54.579671 | orchestrator | 2026-03-10 01:10:54.579679 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-10 01:10:54.579695 | orchestrator | Tuesday 10 March 2026 01:10:27 +0000 (0:00:02.737) 0:00:48.327 ********* 2026-03-10 01:10:54.579702 | 
orchestrator | changed: [testbed-node-0] 2026-03-10 01:10:54.579709 | orchestrator | 2026-03-10 01:10:54.579716 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-10 01:10:54.579723 | orchestrator | Tuesday 10 March 2026 01:10:41 +0000 (0:00:14.113) 0:01:02.441 ********* 2026-03-10 01:10:54.579730 | orchestrator | 2026-03-10 01:10:54.579737 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-10 01:10:54.579744 | orchestrator | Tuesday 10 March 2026 01:10:41 +0000 (0:00:00.069) 0:01:02.510 ********* 2026-03-10 01:10:54.579751 | orchestrator | 2026-03-10 01:10:54.579763 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-10 01:10:54.579770 | orchestrator | Tuesday 10 March 2026 01:10:41 +0000 (0:00:00.087) 0:01:02.597 ********* 2026-03-10 01:10:54.579777 | orchestrator | 2026-03-10 01:10:54.579784 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-10 01:10:54.579791 | orchestrator | Tuesday 10 March 2026 01:10:41 +0000 (0:00:00.073) 0:01:02.670 ********* 2026-03-10 01:10:54.579799 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:10:54.579806 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:10:54.579813 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:10:54.579820 | orchestrator | 2026-03-10 01:10:54.579827 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:10:54.579835 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-10 01:10:54.579844 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-10 01:10:54.579852 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-10 01:10:54.579858 | 
orchestrator | 2026-03-10 01:10:54.579866 | orchestrator | 2026-03-10 01:10:54.579873 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:10:54.579880 | orchestrator | Tuesday 10 March 2026 01:10:52 +0000 (0:00:11.122) 0:01:13.793 ********* 2026-03-10 01:10:54.579888 | orchestrator | =============================================================================== 2026-03-10 01:10:54.579893 | orchestrator | placement : Running placement bootstrap container ---------------------- 14.11s 2026-03-10 01:10:54.579897 | orchestrator | placement : Restart placement-api container ---------------------------- 11.12s 2026-03-10 01:10:54.579901 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.73s 2026-03-10 01:10:54.579906 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.24s 2026-03-10 01:10:54.579910 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.09s 2026-03-10 01:10:54.579914 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.79s 2026-03-10 01:10:54.579923 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.69s 2026-03-10 01:10:54.579927 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.37s 2026-03-10 01:10:54.579932 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.16s 2026-03-10 01:10:54.579936 | orchestrator | placement : Creating placement databases -------------------------------- 3.12s 2026-03-10 01:10:54.579940 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.74s 2026-03-10 01:10:54.579944 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.57s 2026-03-10 01:10:54.579949 | orchestrator | service-cert-copy : placement | Copying 
over extra CA certificates ------ 1.45s 2026-03-10 01:10:54.579953 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.42s 2026-03-10 01:10:54.579957 | orchestrator | placement : Copying over config.json files for services ----------------- 1.40s 2026-03-10 01:10:54.579961 | orchestrator | placement : Check placement containers ---------------------------------- 1.19s 2026-03-10 01:10:54.579966 | orchestrator | placement : Ensuring config directories exist --------------------------- 1.00s 2026-03-10 01:10:54.579970 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.73s 2026-03-10 01:10:54.579974 | orchestrator | placement : include_tasks ----------------------------------------------- 0.67s 2026-03-10 01:10:54.579978 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.66s 2026-03-10 01:10:54.579983 | orchestrator | 2026-03-10 01:10:54 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:10:54.580068 | orchestrator | 2026-03-10 01:10:54 | INFO  | Task 36c1dfb8-cd14-486f-8df0-b13f857cef4a is in state STARTED 2026-03-10 01:10:54.580589 | orchestrator | 2026-03-10 01:10:54 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:10:54.582425 | orchestrator | 2026-03-10 01:10:54 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:10:54.582451 | orchestrator | 2026-03-10 01:10:54 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:10:57.634944 | orchestrator | 2026-03-10 01:10:57 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:10:57.640617 | orchestrator | 2026-03-10 01:10:57 | INFO  | Task 36c1dfb8-cd14-486f-8df0-b13f857cef4a is in state STARTED 2026-03-10 01:10:57.643902 | orchestrator | 2026-03-10 01:10:57 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 
01:10:57.648736 | orchestrator | 2026-03-10 01:10:57 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:10:57.648800 | orchestrator | 2026-03-10 01:10:57 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:00.677717 | orchestrator | 2026-03-10 01:11:00 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:00.677862 | orchestrator | 2026-03-10 01:11:00 | INFO  | Task 36c1dfb8-cd14-486f-8df0-b13f857cef4a is in state STARTED 2026-03-10 01:11:00.677891 | orchestrator | 2026-03-10 01:11:00 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:11:00.678392 | orchestrator | 2026-03-10 01:11:00 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:00.678425 | orchestrator | 2026-03-10 01:11:00 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:03.711226 | orchestrator | 2026-03-10 01:11:03 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:03.712269 | orchestrator | 2026-03-10 01:11:03 | INFO  | Task 36c1dfb8-cd14-486f-8df0-b13f857cef4a is in state STARTED 2026-03-10 01:11:03.713307 | orchestrator | 2026-03-10 01:11:03 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:11:03.714809 | orchestrator | 2026-03-10 01:11:03 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:03.714884 | orchestrator | 2026-03-10 01:11:03 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:06.750153 | orchestrator | 2026-03-10 01:11:06 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:06.750346 | orchestrator | 2026-03-10 01:11:06 | INFO  | Task 36c1dfb8-cd14-486f-8df0-b13f857cef4a is in state STARTED 2026-03-10 01:11:06.750999 | orchestrator | 2026-03-10 01:11:06 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:11:06.752232 | orchestrator 
| 2026-03-10 01:11:06 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:06.752583 | orchestrator | 2026-03-10 01:11:06 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:09.794865 | orchestrator | 2026-03-10 01:11:09 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:09.795822 | orchestrator | 2026-03-10 01:11:09 | INFO  | Task 36c1dfb8-cd14-486f-8df0-b13f857cef4a is in state STARTED 2026-03-10 01:11:09.796995 | orchestrator | 2026-03-10 01:11:09 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:11:09.800169 | orchestrator | 2026-03-10 01:11:09 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:09.800433 | orchestrator | 2026-03-10 01:11:09 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:12.857560 | orchestrator | 2026-03-10 01:11:12 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:12.859058 | orchestrator | 2026-03-10 01:11:12 | INFO  | Task 36c1dfb8-cd14-486f-8df0-b13f857cef4a is in state STARTED 2026-03-10 01:11:12.861592 | orchestrator | 2026-03-10 01:11:12 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:11:12.864968 | orchestrator | 2026-03-10 01:11:12 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:12.865029 | orchestrator | 2026-03-10 01:11:12 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:15.914140 | orchestrator | 2026-03-10 01:11:15 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:15.919625 | orchestrator | 2026-03-10 01:11:15 | INFO  | Task 36c1dfb8-cd14-486f-8df0-b13f857cef4a is in state STARTED 2026-03-10 01:11:15.922924 | orchestrator | 2026-03-10 01:11:15 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:11:15.923952 | orchestrator | 2026-03-10 01:11:15 | INFO  | 
Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:15.924276 | orchestrator | 2026-03-10 01:11:15 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:18.972940 | orchestrator | 2026-03-10 01:11:18 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:18.973423 | orchestrator | 2026-03-10 01:11:18 | INFO  | Task 36c1dfb8-cd14-486f-8df0-b13f857cef4a is in state STARTED 2026-03-10 01:11:18.975540 | orchestrator | 2026-03-10 01:11:18 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:11:18.977560 | orchestrator | 2026-03-10 01:11:18 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:18.977787 | orchestrator | 2026-03-10 01:11:18 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:22.025444 | orchestrator | 2026-03-10 01:11:22 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:22.028023 | orchestrator | 2026-03-10 01:11:22 | INFO  | Task 36c1dfb8-cd14-486f-8df0-b13f857cef4a is in state STARTED 2026-03-10 01:11:22.030357 | orchestrator | 2026-03-10 01:11:22 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:11:22.032269 | orchestrator | 2026-03-10 01:11:22 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:22.032330 | orchestrator | 2026-03-10 01:11:22 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:25.086324 | orchestrator | 2026-03-10 01:11:25 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:25.088392 | orchestrator | 2026-03-10 01:11:25 | INFO  | Task 36c1dfb8-cd14-486f-8df0-b13f857cef4a is in state STARTED 2026-03-10 01:11:25.091135 | orchestrator | 2026-03-10 01:11:25 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:11:25.093344 | orchestrator | 2026-03-10 01:11:25 | INFO  | Task 
25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:25.093403 | orchestrator | 2026-03-10 01:11:25 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:28.134642 | orchestrator | 2026-03-10 01:11:28 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:28.136739 | orchestrator | 2026-03-10 01:11:28 | INFO  | Task 36c1dfb8-cd14-486f-8df0-b13f857cef4a is in state STARTED 2026-03-10 01:11:28.136903 | orchestrator | 2026-03-10 01:11:28 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:11:28.139013 | orchestrator | 2026-03-10 01:11:28 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:28.139061 | orchestrator | 2026-03-10 01:11:28 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:31.183767 | orchestrator | 2026-03-10 01:11:31 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:31.184435 | orchestrator | 2026-03-10 01:11:31 | INFO  | Task 36c1dfb8-cd14-486f-8df0-b13f857cef4a is in state STARTED 2026-03-10 01:11:31.185440 | orchestrator | 2026-03-10 01:11:31 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:11:31.186633 | orchestrator | 2026-03-10 01:11:31 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:31.186683 | orchestrator | 2026-03-10 01:11:31 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:34.231767 | orchestrator | 2026-03-10 01:11:34 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:11:34.231934 | orchestrator | 2026-03-10 01:11:34 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:34.234895 | orchestrator | 2026-03-10 01:11:34 | INFO  | Task 36c1dfb8-cd14-486f-8df0-b13f857cef4a is in state SUCCESS 2026-03-10 01:11:34.235810 | orchestrator | 2026-03-10 01:11:34 | INFO  | Task 
2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:11:34.237154 | orchestrator | 2026-03-10 01:11:34 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:34.237342 | orchestrator | 2026-03-10 01:11:34 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:37.292656 | orchestrator | 2026-03-10 01:11:37 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:11:37.293680 | orchestrator | 2026-03-10 01:11:37 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:37.296122 | orchestrator | 2026-03-10 01:11:37 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:11:37.297026 | orchestrator | 2026-03-10 01:11:37 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:37.297238 | orchestrator | 2026-03-10 01:11:37 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:40.350603 | orchestrator | 2026-03-10 01:11:40 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:11:40.350728 | orchestrator | 2026-03-10 01:11:40 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:40.352036 | orchestrator | 2026-03-10 01:11:40 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:11:40.353509 | orchestrator | 2026-03-10 01:11:40 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:40.353550 | orchestrator | 2026-03-10 01:11:40 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:43.386309 | orchestrator | 2026-03-10 01:11:43 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:11:43.387419 | orchestrator | 2026-03-10 01:11:43 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:43.388151 | orchestrator | 2026-03-10 01:11:43 | INFO  | Task 
2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:11:43.389512 | orchestrator | 2026-03-10 01:11:43 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:43.389581 | orchestrator | 2026-03-10 01:11:43 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:46.427779 | orchestrator | 2026-03-10 01:11:46 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:11:46.428600 | orchestrator | 2026-03-10 01:11:46 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:46.429611 | orchestrator | 2026-03-10 01:11:46 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state STARTED 2026-03-10 01:11:46.430960 | orchestrator | 2026-03-10 01:11:46 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:46.430998 | orchestrator | 2026-03-10 01:11:46 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:49.475934 | orchestrator | 2026-03-10 01:11:49 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:11:49.479064 | orchestrator | 2026-03-10 01:11:49 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:49.482884 | orchestrator | 2026-03-10 01:11:49 | INFO  | Task 2678be10-b069-41ea-9b29-410b56995c69 is in state SUCCESS 2026-03-10 01:11:49.484806 | orchestrator | 2026-03-10 01:11:49.484855 | orchestrator | 2026-03-10 01:11:49.484861 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:11:49.484867 | orchestrator | 2026-03-10 01:11:49.484872 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:11:49.484877 | orchestrator | Tuesday 10 March 2026 01:10:59 +0000 (0:00:00.299) 0:00:00.299 ********* 2026-03-10 01:11:49.484883 | orchestrator | ok: [testbed-manager] 2026-03-10 01:11:49.484888 | orchestrator | ok: [testbed-node-0] 
2026-03-10 01:11:49.484894 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:11:49.484898 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:11:49.484903 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:11:49.484908 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:11:49.484912 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:11:49.484917 | orchestrator | 2026-03-10 01:11:49.484922 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:11:49.484927 | orchestrator | Tuesday 10 March 2026 01:11:00 +0000 (0:00:00.814) 0:00:01.113 ********* 2026-03-10 01:11:49.484932 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2026-03-10 01:11:49.484937 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2026-03-10 01:11:49.484961 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2026-03-10 01:11:49.484966 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2026-03-10 01:11:49.484971 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2026-03-10 01:11:49.484999 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True) 2026-03-10 01:11:49.485022 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2026-03-10 01:11:49.485027 | orchestrator | 2026-03-10 01:11:49.485032 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2026-03-10 01:11:49.485036 | orchestrator | 2026-03-10 01:11:49.485066 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2026-03-10 01:11:49.485071 | orchestrator | Tuesday 10 March 2026 01:11:01 +0000 (0:00:00.713) 0:00:01.826 ********* 2026-03-10 01:11:49.485076 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:11:49.485095 | orchestrator | 
2026-03-10 01:11:49.485100 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2026-03-10 01:11:49.485105 | orchestrator | Tuesday 10 March 2026 01:11:02 +0000 (0:00:01.443) 0:00:03.269 ********* 2026-03-10 01:11:49.485109 | orchestrator | changed: [testbed-manager] => (item=swift (object-store)) 2026-03-10 01:11:49.485156 | orchestrator | 2026-03-10 01:11:49.485161 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2026-03-10 01:11:49.485165 | orchestrator | Tuesday 10 March 2026 01:11:06 +0000 (0:00:03.449) 0:00:06.719 ********* 2026-03-10 01:11:49.485171 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2026-03-10 01:11:49.485178 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2026-03-10 01:11:49.485186 | orchestrator | 2026-03-10 01:11:49.485194 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2026-03-10 01:11:49.485217 | orchestrator | Tuesday 10 March 2026 01:11:13 +0000 (0:00:07.090) 0:00:13.809 ********* 2026-03-10 01:11:49.485228 | orchestrator | ok: [testbed-manager] => (item=service) 2026-03-10 01:11:49.485377 | orchestrator | 2026-03-10 01:11:49.485390 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2026-03-10 01:11:49.485398 | orchestrator | Tuesday 10 March 2026 01:11:16 +0000 (0:00:03.575) 0:00:17.385 ********* 2026-03-10 01:11:49.485406 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service) 2026-03-10 01:11:49.485414 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-10 01:11:49.485421 | orchestrator | 2026-03-10 01:11:49.485426 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 
2026-03-10 01:11:49.485432 | orchestrator | Tuesday 10 March 2026 01:11:20 +0000 (0:00:04.013) 0:00:21.399 ********* 2026-03-10 01:11:49.485437 | orchestrator | ok: [testbed-manager] => (item=admin) 2026-03-10 01:11:49.485443 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin) 2026-03-10 01:11:49.485448 | orchestrator | 2026-03-10 01:11:49.485453 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2026-03-10 01:11:49.485459 | orchestrator | Tuesday 10 March 2026 01:11:27 +0000 (0:00:06.562) 0:00:27.961 ********* 2026-03-10 01:11:49.485464 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin) 2026-03-10 01:11:49.485469 | orchestrator | 2026-03-10 01:11:49.485474 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:11:49.485479 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:11:49.485486 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:11:49.485492 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:11:49.485547 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:11:49.485566 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:11:49.485609 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:11:49.485620 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-10 01:11:49.485628 | orchestrator | 2026-03-10 01:11:49.485634 | orchestrator | 2026-03-10 01:11:49.485640 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 
01:11:49.485646 | orchestrator | Tuesday 10 March 2026 01:11:32 +0000 (0:00:05.087) 0:00:33.049 ********* 2026-03-10 01:11:49.485654 | orchestrator | =============================================================================== 2026-03-10 01:11:49.485686 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.09s 2026-03-10 01:11:49.485698 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.56s 2026-03-10 01:11:49.485706 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 5.09s 2026-03-10 01:11:49.485715 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 4.01s 2026-03-10 01:11:49.485748 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.58s 2026-03-10 01:11:49.485757 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.45s 2026-03-10 01:11:49.485765 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.44s 2026-03-10 01:11:49.485773 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.81s 2026-03-10 01:11:49.485780 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s 2026-03-10 01:11:49.485788 | orchestrator | 2026-03-10 01:11:49.485796 | orchestrator | 2026-03-10 01:11:49.485804 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:11:49.485812 | orchestrator | 2026-03-10 01:11:49.485819 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:11:49.485826 | orchestrator | Tuesday 10 March 2026 01:05:40 +0000 (0:00:00.320) 0:00:00.320 ********* 2026-03-10 01:11:49.485835 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:11:49.485843 | orchestrator | ok: [testbed-node-1] 2026-03-10 
01:11:49.485850 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:11:49.485876 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:11:49.485886 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:11:49.485894 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:11:49.485902 | orchestrator | 2026-03-10 01:11:49.485910 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:11:49.485915 | orchestrator | Tuesday 10 March 2026 01:05:41 +0000 (0:00:00.952) 0:00:01.272 ********* 2026-03-10 01:11:49.485920 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-10 01:11:49.485924 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-10 01:11:49.485929 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-10 01:11:49.485933 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-10 01:11:49.485938 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-10 01:11:49.485942 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-10 01:11:49.485947 | orchestrator | 2026-03-10 01:11:49.485951 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-10 01:11:49.485956 | orchestrator | 2026-03-10 01:11:49.485960 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-10 01:11:49.485972 | orchestrator | Tuesday 10 March 2026 01:05:42 +0000 (0:00:00.890) 0:00:02.163 ********* 2026-03-10 01:11:49.485985 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:11:49.485990 | orchestrator | 2026-03-10 01:11:49.485994 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-10 01:11:49.485999 | orchestrator | Tuesday 10 March 2026 01:05:44 
+0000 (0:00:02.195) 0:00:04.358 ********* 2026-03-10 01:11:49.486003 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:11:49.486008 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:11:49.486048 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:11:49.486054 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:11:49.486059 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:11:49.486063 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:11:49.486068 | orchestrator | 2026-03-10 01:11:49.486072 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-10 01:11:49.486077 | orchestrator | Tuesday 10 March 2026 01:05:47 +0000 (0:00:02.459) 0:00:06.818 ********* 2026-03-10 01:11:49.486081 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:11:49.486086 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:11:49.486090 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:11:49.486095 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:11:49.486099 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:11:49.486104 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:11:49.486108 | orchestrator | 2026-03-10 01:11:49.486113 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-10 01:11:49.486117 | orchestrator | Tuesday 10 March 2026 01:05:48 +0000 (0:00:01.424) 0:00:08.242 ********* 2026-03-10 01:11:49.486122 | orchestrator | ok: [testbed-node-0] => { 2026-03-10 01:11:49.486127 | orchestrator |  "changed": false, 2026-03-10 01:11:49.486132 | orchestrator |  "msg": "All assertions passed" 2026-03-10 01:11:49.486136 | orchestrator | } 2026-03-10 01:11:49.486141 | orchestrator | ok: [testbed-node-1] => { 2026-03-10 01:11:49.486146 | orchestrator |  "changed": false, 2026-03-10 01:11:49.486150 | orchestrator |  "msg": "All assertions passed" 2026-03-10 01:11:49.486155 | orchestrator | } 2026-03-10 01:11:49.486159 | orchestrator | ok: [testbed-node-2] => { 2026-03-10 
01:11:49.486164 | orchestrator |  "changed": false, 2026-03-10 01:11:49.486168 | orchestrator |  "msg": "All assertions passed" 2026-03-10 01:11:49.486173 | orchestrator | } 2026-03-10 01:11:49.486177 | orchestrator | ok: [testbed-node-3] => { 2026-03-10 01:11:49.486181 | orchestrator |  "changed": false, 2026-03-10 01:11:49.486186 | orchestrator |  "msg": "All assertions passed" 2026-03-10 01:11:49.486190 | orchestrator | } 2026-03-10 01:11:49.486195 | orchestrator | ok: [testbed-node-4] => { 2026-03-10 01:11:49.486199 | orchestrator |  "changed": false, 2026-03-10 01:11:49.486219 | orchestrator |  "msg": "All assertions passed" 2026-03-10 01:11:49.486224 | orchestrator | } 2026-03-10 01:11:49.486229 | orchestrator | ok: [testbed-node-5] => { 2026-03-10 01:11:49.486282 | orchestrator |  "changed": false, 2026-03-10 01:11:49.486289 | orchestrator |  "msg": "All assertions passed" 2026-03-10 01:11:49.486293 | orchestrator | } 2026-03-10 01:11:49.486298 | orchestrator | 2026-03-10 01:11:49.486303 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-10 01:11:49.486308 | orchestrator | Tuesday 10 March 2026 01:05:49 +0000 (0:00:01.008) 0:00:09.251 ********* 2026-03-10 01:11:49.486312 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.486317 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.486321 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.486326 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.486330 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.486335 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.486340 | orchestrator | 2026-03-10 01:11:49.486344 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-10 01:11:49.486349 | orchestrator | Tuesday 10 March 2026 01:05:50 +0000 (0:00:01.142) 0:00:10.394 ********* 2026-03-10 01:11:49.486360 | orchestrator | changed: 
[testbed-node-0] => (item=neutron (network)) 2026-03-10 01:11:49.486396 | orchestrator | 2026-03-10 01:11:49.486401 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-10 01:11:49.486405 | orchestrator | Tuesday 10 March 2026 01:05:55 +0000 (0:00:04.061) 0:00:14.455 ********* 2026-03-10 01:11:49.486410 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-10 01:11:49.486415 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-10 01:11:49.486420 | orchestrator | 2026-03-10 01:11:49.486424 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-10 01:11:49.486429 | orchestrator | Tuesday 10 March 2026 01:06:02 +0000 (0:00:07.701) 0:00:22.156 ********* 2026-03-10 01:11:49.486433 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-10 01:11:49.486438 | orchestrator | 2026-03-10 01:11:49.486442 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-10 01:11:49.486447 | orchestrator | Tuesday 10 March 2026 01:06:06 +0000 (0:00:03.397) 0:00:25.554 ********* 2026-03-10 01:11:49.486451 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-03-10 01:11:49.486456 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-10 01:11:49.486461 | orchestrator | 2026-03-10 01:11:49.486465 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-10 01:11:49.486470 | orchestrator | Tuesday 10 March 2026 01:06:10 +0000 (0:00:04.614) 0:00:30.168 ********* 2026-03-10 01:11:49.486474 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-10 01:11:49.486479 | orchestrator | 2026-03-10 01:11:49.486483 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 
2026-03-10 01:11:49.486488 | orchestrator | Tuesday 10 March 2026 01:06:14 +0000 (0:00:04.124) 0:00:34.293 ********* 2026-03-10 01:11:49.486493 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-10 01:11:49.486497 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-10 01:11:49.486502 | orchestrator | 2026-03-10 01:11:49.486506 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-10 01:11:49.486511 | orchestrator | Tuesday 10 March 2026 01:06:24 +0000 (0:00:09.730) 0:00:44.024 ********* 2026-03-10 01:11:49.486515 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.486526 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.486530 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.486535 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.486540 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.486547 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.486554 | orchestrator | 2026-03-10 01:11:49.486561 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-10 01:11:49.486571 | orchestrator | Tuesday 10 March 2026 01:06:28 +0000 (0:00:03.461) 0:00:47.486 ********* 2026-03-10 01:11:49.486581 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.486588 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.486595 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.486602 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.486609 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.486616 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.486623 | orchestrator | 2026-03-10 01:11:49.486630 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2026-03-10 01:11:49.486638 | orchestrator | Tuesday 10 March 2026 
01:06:33 +0000 (0:00:05.006) 0:00:52.492 ********* 2026-03-10 01:11:49.486645 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:11:49.486652 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:11:49.486659 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:11:49.486666 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:11:49.486672 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:11:49.486678 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:11:49.486692 | orchestrator | 2026-03-10 01:11:49.486702 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-10 01:11:49.486709 | orchestrator | Tuesday 10 March 2026 01:06:34 +0000 (0:00:01.464) 0:00:53.957 ********* 2026-03-10 01:11:49.486716 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.486723 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.486730 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.486737 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.486744 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.486751 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.486758 | orchestrator | 2026-03-10 01:11:49.486765 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2026-03-10 01:11:49.486773 | orchestrator | Tuesday 10 March 2026 01:06:38 +0000 (0:00:04.179) 0:00:58.137 ********* 2026-03-10 01:11:49.486795 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:11:49.486806 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:11:49.486820 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:11:49.486829 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:11:49.486846 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:11:49.486858 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:11:49.486865 | orchestrator | 2026-03-10 01:11:49.486871 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2026-03-10 01:11:49.486878 | orchestrator | Tuesday 10 March 2026 01:06:44 +0000 (0:00:05.566) 0:01:03.703 ********* 2026-03-10 01:11:49.486884 | orchestrator | [WARNING]: Skipped 2026-03-10 01:11:49.486892 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2026-03-10 01:11:49.486899 | orchestrator | due to this access issue: 2026-03-10 01:11:49.486906 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2026-03-10 01:11:49.486913 | orchestrator | a directory 2026-03-10 01:11:49.486920 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 01:11:49.486926 | orchestrator | 2026-03-10 01:11:49.486932 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-10 01:11:49.486939 | orchestrator | Tuesday 10 March 2026 01:06:45 +0000 (0:00:01.000) 0:01:04.704 ********* 2026-03-10 01:11:49.486946 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:11:49.486954 | orchestrator | 2026-03-10 01:11:49.486960 | orchestrator | TASK [service-cert-copy : neutron | 
Copying over extra CA certificates] ******** 2026-03-10 01:11:49.486966 | orchestrator | Tuesday 10 March 2026 01:06:46 +0000 (0:00:01.475) 0:01:06.179 ********* 2026-03-10 01:11:49.486978 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:11:49.486995 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}}) 2026-03-10 01:11:49.487003 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:11:49.487029 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:11:49.487038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:11:49.487046 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:11:49.487060 | orchestrator | 2026-03-10 01:11:49.487079 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] *** 2026-03-10 01:11:49.487087 | orchestrator | Tuesday 10 March 2026 01:06:52 +0000 (0:00:05.492) 0:01:11.672 ********* 2026-03-10 01:11:49.487096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.487104 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.487117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.487125 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.487133 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.487141 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.487149 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.487163 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.487172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.487177 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.487181 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.487186 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.487190 | orchestrator | 2026-03-10 01:11:49.487195 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2026-03-10 01:11:49.487199 | orchestrator | Tuesday 10 March 2026 01:06:59 +0000 (0:00:07.695) 0:01:19.367 ********* 2026-03-10 01:11:49.487210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.487215 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.487220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.487224 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.487255 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.487262 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.487271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.487276 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.487281 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.487286 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.487295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.487300 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.487305 | orchestrator | 2026-03-10 01:11:49.487309 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2026-03-10 01:11:49.487314 | orchestrator | Tuesday 10 March 2026 01:07:07 +0000 (0:00:07.505) 0:01:26.873 ********* 2026-03-10 01:11:49.487319 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.487323 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.487328 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.487332 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.487337 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.487346 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.487351 | orchestrator | 2026-03-10 01:11:49.487355 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2026-03-10 01:11:49.487360 | orchestrator | Tuesday 10 March 2026 01:07:12 +0000 (0:00:04.674) 0:01:31.548 ********* 
2026-03-10 01:11:49.487364 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.487369 | orchestrator | 2026-03-10 01:11:49.487373 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2026-03-10 01:11:49.487378 | orchestrator | Tuesday 10 March 2026 01:07:12 +0000 (0:00:00.204) 0:01:31.752 ********* 2026-03-10 01:11:49.487382 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.487387 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.487391 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.487396 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.487400 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.487405 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.487409 | orchestrator | 2026-03-10 01:11:49.487414 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2026-03-10 01:11:49.487418 | orchestrator | Tuesday 10 March 2026 01:07:13 +0000 (0:00:01.329) 0:01:33.082 ********* 2026-03-10 01:11:49.487426 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
2026-03-10 01:11:49.487431 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.487435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.487440 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.487452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.487457 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.487466 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.487471 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.487476 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.487480 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.487488 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.487493 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.487497 | orchestrator | 2026-03-10 01:11:49.487502 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2026-03-10 01:11:49.487507 | orchestrator | Tuesday 10 March 2026 01:07:18 +0000 (0:00:04.360) 0:01:37.442 ********* 2026-03-10 01:11:49.487511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:11:49.487521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:11:49.487531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:11:49.487536 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 
'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:11:49.487544 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:11:49.487550 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:11:49.487554 | orchestrator | 2026-03-10 01:11:49.487559 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-10 01:11:49.487564 | orchestrator | Tuesday 10 March 2026 01:07:23 +0000 (0:00:05.456) 0:01:42.899 ********* 2026-03-10 01:11:49.487576 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:11:49.487581 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:11:49.487590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:11:49.487595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:11:49.487600 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:11:49.487613 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:11:49.487618 | orchestrator | 2026-03-10 01:11:49.487623 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-10 01:11:49.487627 | orchestrator | Tuesday 10 March 2026 01:07:31 +0000 (0:00:07.923) 0:01:50.823 ********* 2026-03-10 01:11:49.487632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.487638 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.487644 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.487649 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.487655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 
'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.487665 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.487739 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.487756 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.487762 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.487767 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.487773 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.487779 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.487784 | orchestrator | 2026-03-10 01:11:49.487790 | orchestrator | TASK [neutron : Copying over ssh key] ****************************************** 2026-03-10 01:11:49.487795 | orchestrator | Tuesday 10 March 2026 01:07:34 +0000 (0:00:03.130) 0:01:53.953 ********* 2026-03-10 01:11:49.487800 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.487804 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.487809 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.487814 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:11:49.487818 | orchestrator | 
changed: [testbed-node-0] 2026-03-10 01:11:49.487823 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:11:49.487827 | orchestrator | 2026-03-10 01:11:49.487835 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-10 01:11:49.487840 | orchestrator | Tuesday 10 March 2026 01:07:38 +0000 (0:00:04.354) 0:01:58.308 ********* 2026-03-10 01:11:49.487845 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.487854 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.487859 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': 
'30'}}})  2026-03-10 01:11:49.487863 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.487873 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.487878 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.487882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:11:49.487891 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:11:49.487896 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:11:49.487905 | orchestrator | 2026-03-10 01:11:49.487910 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-10 01:11:49.487914 | orchestrator | Tuesday 10 March 2026 01:07:45 +0000 (0:00:06.483) 0:02:04.792 
********* 2026-03-10 01:11:49.487919 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.487923 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.487928 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.487932 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.487937 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.487941 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.487946 | orchestrator | 2026-03-10 01:11:49.487950 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-10 01:11:49.487955 | orchestrator | Tuesday 10 March 2026 01:07:47 +0000 (0:00:02.436) 0:02:07.229 ********* 2026-03-10 01:11:49.487960 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.487964 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.487968 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.487973 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.487977 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.487985 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.487990 | orchestrator | 2026-03-10 01:11:49.487994 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-10 01:11:49.487999 | orchestrator | Tuesday 10 March 2026 01:07:52 +0000 (0:00:05.140) 0:02:12.369 ********* 2026-03-10 01:11:49.488003 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.488008 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.488012 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.488017 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.488021 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.488026 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.488030 | orchestrator | 2026-03-10 01:11:49.488035 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] 
*********************************** 2026-03-10 01:11:49.488039 | orchestrator | Tuesday 10 March 2026 01:07:56 +0000 (0:00:03.926) 0:02:16.296 ********* 2026-03-10 01:11:49.488044 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.488048 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.488053 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.488058 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.488062 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.488066 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.488071 | orchestrator | 2026-03-10 01:11:49.488076 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-10 01:11:49.488080 | orchestrator | Tuesday 10 March 2026 01:07:59 +0000 (0:00:03.071) 0:02:19.368 ********* 2026-03-10 01:11:49.488085 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.488089 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.488094 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.488098 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.488103 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.488108 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.488112 | orchestrator | 2026-03-10 01:11:49.488117 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-10 01:11:49.488126 | orchestrator | Tuesday 10 March 2026 01:08:04 +0000 (0:00:04.813) 0:02:24.182 ********* 2026-03-10 01:11:49.488131 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.488135 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.488140 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.488145 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.488149 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.488153 | orchestrator | skipping: 
[testbed-node-3] 2026-03-10 01:11:49.488158 | orchestrator | 2026-03-10 01:11:49.488162 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-10 01:11:49.488167 | orchestrator | Tuesday 10 March 2026 01:08:08 +0000 (0:00:04.034) 0:02:28.219 ********* 2026-03-10 01:11:49.488172 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-10 01:11:49.488176 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.488181 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-10 01:11:49.488185 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.488190 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-10 01:11:49.488194 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.488199 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-10 01:11:49.488203 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.488211 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-10 01:11:49.488216 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.488220 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-10 01:11:49.488225 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.488229 | orchestrator | 2026-03-10 01:11:49.488251 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-10 01:11:49.488256 | orchestrator | Tuesday 10 March 2026 01:08:12 +0000 (0:00:03.898) 0:02:32.118 ********* 2026-03-10 01:11:49.488261 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.488266 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.488275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.488283 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.488288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.488293 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.488298 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.488306 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.488311 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.488315 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.488320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.488325 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.488329 | orchestrator | 2026-03-10 01:11:49.488334 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2026-03-10 01:11:49.488338 | orchestrator | Tuesday 10 March 2026 01:08:16 +0000 (0:00:04.109) 0:02:36.228 ********* 2026-03-10 01:11:49.488346 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.488357 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.488362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.488366 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.488374 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': 
True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.488379 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.488384 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.488389 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.488396 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.488405 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.488409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.488414 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.488418 | orchestrator | 2026-03-10 01:11:49.488423 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2026-03-10 01:11:49.488428 | orchestrator | Tuesday 10 March 2026 01:08:21 +0000 (0:00:04.970) 0:02:41.198 ********* 2026-03-10 01:11:49.488432 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.488437 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.488441 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.488446 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.488450 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.488454 | orchestrator | 
skipping: [testbed-node-5] 2026-03-10 01:11:49.488459 | orchestrator | 2026-03-10 01:11:49.488463 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2026-03-10 01:11:49.488468 | orchestrator | Tuesday 10 March 2026 01:08:27 +0000 (0:00:05.759) 0:02:46.957 ********* 2026-03-10 01:11:49.488472 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.488477 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.488481 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.488486 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:11:49.488491 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:11:49.488495 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:11:49.488500 | orchestrator | 2026-03-10 01:11:49.488505 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2026-03-10 01:11:49.488509 | orchestrator | Tuesday 10 March 2026 01:08:36 +0000 (0:00:08.991) 0:02:55.949 ********* 2026-03-10 01:11:49.488514 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.488518 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.488522 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.488527 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.488531 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.488536 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.488540 | orchestrator | 2026-03-10 01:11:49.488545 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2026-03-10 01:11:49.488549 | orchestrator | Tuesday 10 March 2026 01:08:41 +0000 (0:00:05.392) 0:03:01.341 ********* 2026-03-10 01:11:49.488554 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.488561 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.488566 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.488570 | orchestrator | 
skipping: [testbed-node-5] 2026-03-10 01:11:49.488575 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.488579 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.488584 | orchestrator | 2026-03-10 01:11:49.488588 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2026-03-10 01:11:49.488593 | orchestrator | Tuesday 10 March 2026 01:08:46 +0000 (0:00:04.081) 0:03:05.422 ********* 2026-03-10 01:11:49.488597 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.488602 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.488611 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.488616 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.488620 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.488625 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.488629 | orchestrator | 2026-03-10 01:11:49.488634 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2026-03-10 01:11:49.488638 | orchestrator | Tuesday 10 March 2026 01:08:49 +0000 (0:00:03.015) 0:03:08.437 ********* 2026-03-10 01:11:49.488643 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.488647 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.488652 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.488656 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.488661 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.488665 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.488670 | orchestrator | 2026-03-10 01:11:49.488674 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2026-03-10 01:11:49.488679 | orchestrator | Tuesday 10 March 2026 01:08:52 +0000 (0:00:03.930) 0:03:12.368 ********* 2026-03-10 01:11:49.488683 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.488688 | orchestrator | 
skipping: [testbed-node-2] 2026-03-10 01:11:49.488692 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.488697 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.488701 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.488705 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.488710 | orchestrator | 2026-03-10 01:11:49.488714 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2026-03-10 01:11:49.488719 | orchestrator | Tuesday 10 March 2026 01:08:56 +0000 (0:00:03.050) 0:03:15.419 ********* 2026-03-10 01:11:49.488723 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.488728 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.488732 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.488737 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.488742 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.488746 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.488751 | orchestrator | 2026-03-10 01:11:49.488758 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2026-03-10 01:11:49.488763 | orchestrator | Tuesday 10 March 2026 01:08:59 +0000 (0:00:03.370) 0:03:18.789 ********* 2026-03-10 01:11:49.488767 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.488772 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.488776 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.488781 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.488785 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.488790 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.488795 | orchestrator | 2026-03-10 01:11:49.488799 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2026-03-10 01:11:49.488804 | orchestrator | Tuesday 10 March 2026 01:09:02 +0000 (0:00:02.949) 
0:03:21.739 ********* 2026-03-10 01:11:49.488808 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-10 01:11:49.488814 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.488818 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-10 01:11:49.488823 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.488827 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-10 01:11:49.488832 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.488836 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-10 01:11:49.488841 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.488845 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-10 01:11:49.488856 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.488861 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2026-03-10 01:11:49.488865 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.488870 | orchestrator | 2026-03-10 01:11:49.488875 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ******************************** 2026-03-10 01:11:49.488879 | orchestrator | Tuesday 10 March 2026 01:09:04 +0000 (0:00:02.290) 0:03:24.029 ********* 2026-03-10 01:11:49.488886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.488891 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.488896 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.488901 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.488909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-10 01:11:49.488913 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.488918 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.488930 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.488934 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.488939 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.488957 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-10 01:11:49.488962 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.488966 | orchestrator | 2026-03-10 01:11:49.488971 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2026-03-10 01:11:49.488975 | orchestrator | Tuesday 10 March 2026 01:09:06 +0000 (0:00:02.004) 0:03:26.033 ********* 2026-03-10 01:11:49.488980 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:11:49.489042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:11:49.489048 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:11:49.489057 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-10 01:11:49.489066 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:11:49.489071 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-10 01:11:49.489076 | orchestrator | 2026-03-10 01:11:49.489081 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-10 01:11:49.489085 | orchestrator | Tuesday 10 March 2026 01:09:10 +0000 (0:00:04.136) 0:03:30.170 ********* 2026-03-10 01:11:49.489090 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:11:49.489094 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:11:49.489099 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:11:49.489103 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:11:49.489108 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:11:49.489112 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:11:49.489117 | orchestrator | 2026-03-10 01:11:49.489122 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2026-03-10 01:11:49.489126 | orchestrator | Tuesday 10 March 2026 01:09:12 +0000 (0:00:01.955) 0:03:32.126 ********* 2026-03-10 01:11:49.489133 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:11:49.489142 | orchestrator | 2026-03-10 01:11:49.489146 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2026-03-10 01:11:49.489151 | orchestrator | Tuesday 10 March 2026 01:09:15 +0000 (0:00:02.724) 0:03:34.851 ********* 2026-03-10 01:11:49.489155 | orchestrator | changed: 
[testbed-node-0] 2026-03-10 01:11:49.489160 | orchestrator | 2026-03-10 01:11:49.489164 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2026-03-10 01:11:49.489169 | orchestrator | Tuesday 10 March 2026 01:09:18 +0000 (0:00:03.394) 0:03:38.245 ********* 2026-03-10 01:11:49.489174 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:11:49.489178 | orchestrator | 2026-03-10 01:11:49.489183 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-10 01:11:49.489187 | orchestrator | Tuesday 10 March 2026 01:10:21 +0000 (0:01:02.386) 0:04:40.631 ********* 2026-03-10 01:11:49.489192 | orchestrator | 2026-03-10 01:11:49.489196 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-10 01:11:49.489201 | orchestrator | Tuesday 10 March 2026 01:10:21 +0000 (0:00:00.103) 0:04:40.735 ********* 2026-03-10 01:11:49.489205 | orchestrator | 2026-03-10 01:11:49.489210 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-10 01:11:49.489214 | orchestrator | Tuesday 10 March 2026 01:10:21 +0000 (0:00:00.314) 0:04:41.049 ********* 2026-03-10 01:11:49.489219 | orchestrator | 2026-03-10 01:11:49.489224 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-10 01:11:49.489228 | orchestrator | Tuesday 10 March 2026 01:10:21 +0000 (0:00:00.083) 0:04:41.133 ********* 2026-03-10 01:11:49.489233 | orchestrator | 2026-03-10 01:11:49.489259 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-10 01:11:49.489263 | orchestrator | Tuesday 10 March 2026 01:10:21 +0000 (0:00:00.080) 0:04:41.213 ********* 2026-03-10 01:11:49.489268 | orchestrator | 2026-03-10 01:11:49.489272 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2026-03-10 01:11:49.489277 
| orchestrator | Tuesday 10 March 2026 01:10:21 +0000 (0:00:00.076) 0:04:41.290 ********* 2026-03-10 01:11:49.489282 | orchestrator | 2026-03-10 01:11:49.489286 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2026-03-10 01:11:49.489291 | orchestrator | Tuesday 10 March 2026 01:10:21 +0000 (0:00:00.074) 0:04:41.365 ********* 2026-03-10 01:11:49.489295 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:11:49.489300 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:11:49.489304 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:11:49.489309 | orchestrator | 2026-03-10 01:11:49.489314 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2026-03-10 01:11:49.489318 | orchestrator | Tuesday 10 March 2026 01:10:46 +0000 (0:00:24.371) 0:05:05.736 ********* 2026-03-10 01:11:49.489323 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:11:49.489327 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:11:49.489332 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:11:49.489336 | orchestrator | 2026-03-10 01:11:49.489341 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:11:49.489346 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-10 01:11:49.489354 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-10 01:11:49.489359 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2026-03-10 01:11:49.489363 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-10 01:11:49.489368 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-10 01:11:49.489377 | orchestrator | testbed-node-5 : ok=15  changed=7 
 unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-10 01:11:49.489381 | orchestrator | 2026-03-10 01:11:49.489386 | orchestrator | 2026-03-10 01:11:49.489390 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:11:49.489395 | orchestrator | Tuesday 10 March 2026 01:11:48 +0000 (0:01:02.240) 0:06:07.976 ********* 2026-03-10 01:11:49.489399 | orchestrator | =============================================================================== 2026-03-10 01:11:49.489404 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 62.39s 2026-03-10 01:11:49.489409 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 62.24s 2026-03-10 01:11:49.489413 | orchestrator | neutron : Restart neutron-server container ----------------------------- 24.37s 2026-03-10 01:11:49.489418 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 9.73s 2026-03-10 01:11:49.489422 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 8.99s 2026-03-10 01:11:49.489427 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.92s 2026-03-10 01:11:49.489432 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.70s 2026-03-10 01:11:49.489436 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 7.70s 2026-03-10 01:11:49.489441 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 7.51s 2026-03-10 01:11:49.489445 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 6.48s 2026-03-10 01:11:49.489453 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 5.76s 2026-03-10 01:11:49.489457 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 
5.57s 2026-03-10 01:11:49.489462 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 5.49s 2026-03-10 01:11:49.489466 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.46s 2026-03-10 01:11:49.489471 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 5.39s 2026-03-10 01:11:49.489476 | orchestrator | neutron : Copying over openvswitch_agent.ini ---------------------------- 5.14s 2026-03-10 01:11:49.489480 | orchestrator | Load and persist kernel modules ----------------------------------------- 5.01s 2026-03-10 01:11:49.489485 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 4.97s 2026-03-10 01:11:49.489489 | orchestrator | neutron : Copying over eswitchd.conf ------------------------------------ 4.82s 2026-03-10 01:11:49.489494 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 4.68s 2026-03-10 01:11:49.489498 | orchestrator | 2026-03-10 01:11:49 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:49.489503 | orchestrator | 2026-03-10 01:11:49 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:52.533071 | orchestrator | 2026-03-10 01:11:52 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:11:52.535291 | orchestrator | 2026-03-10 01:11:52 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:11:52.536826 | orchestrator | 2026-03-10 01:11:52 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:52.538725 | orchestrator | 2026-03-10 01:11:52 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:52.538771 | orchestrator | 2026-03-10 01:11:52 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:55.584896 | orchestrator | 2026-03-10 01:11:55 | INFO  | Task 
acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:11:55.588504 | orchestrator | 2026-03-10 01:11:55 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:11:55.590444 | orchestrator | 2026-03-10 01:11:55 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:55.593151 | orchestrator | 2026-03-10 01:11:55 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:55.593209 | orchestrator | 2026-03-10 01:11:55 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:11:58.642077 | orchestrator | 2026-03-10 01:11:58 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:11:58.643886 | orchestrator | 2026-03-10 01:11:58 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:11:58.645682 | orchestrator | 2026-03-10 01:11:58 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:11:58.647679 | orchestrator | 2026-03-10 01:11:58 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:11:58.647822 | orchestrator | 2026-03-10 01:11:58 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:12:01.697917 | orchestrator | 2026-03-10 01:12:01 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:12:01.700764 | orchestrator | 2026-03-10 01:12:01 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:12:01.703861 | orchestrator | 2026-03-10 01:12:01 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:12:01.707118 | orchestrator | 2026-03-10 01:12:01 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state STARTED 2026-03-10 01:12:01.707204 | orchestrator | 2026-03-10 01:12:01 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:04.873833 | orchestrator | 2026-03-10 01:14:04 | INFO  | Task 
acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:14:04.875038 | orchestrator | 2026-03-10 01:14:04 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:14:04.876378 | orchestrator | 2026-03-10 01:14:04 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:14:04.882194 | orchestrator | 2026-03-10 01:14:04 | INFO  | Task 25690181-3fcc-49c6-a958-cc970e4b23e8 is in state SUCCESS 2026-03-10 01:14:04.884580 | orchestrator | 2026-03-10 01:14:04.884613 | orchestrator | 2026-03-10 01:14:04.884618 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:14:04.884623 | orchestrator | 2026-03-10 01:14:04.884627 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:14:04.884632 | orchestrator | Tuesday 10 March 2026 01:10:08 +0000 (0:00:00.492) 0:00:00.492 ********* 2026-03-10 01:14:04.884637 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:14:04.884663 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:14:04.884669 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:14:04.884673 | orchestrator | 2026-03-10 01:14:04.884677 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:14:04.884681 | orchestrator | Tuesday 10 March 2026 01:10:08 +0000 (0:00:00.395) 0:00:00.888 ********* 2026-03-10 01:14:04.884685 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-10 01:14:04.884690 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-10 01:14:04.884694 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-10 01:14:04.884698 | orchestrator | 2026-03-10 01:14:04.884702 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-10 01:14:04.884706 | orchestrator | 2026-03-10 01:14:04.884709 | orchestrator | TASK [magnum : 
include_tasks] ************************************************** 2026-03-10 01:14:04.884713 | orchestrator | Tuesday 10 March 2026 01:10:09 +0000 (0:00:00.484) 0:00:01.372 ********* 2026-03-10 01:14:04.884717 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:14:04.884740 | orchestrator | 2026-03-10 01:14:04.884744 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-10 01:14:04.884748 | orchestrator | Tuesday 10 March 2026 01:10:10 +0000 (0:00:00.640) 0:00:02.013 ********* 2026-03-10 01:14:04.884752 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2026-03-10 01:14:04.884756 | orchestrator | 2026-03-10 01:14:04.884760 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-10 01:14:04.884764 | orchestrator | Tuesday 10 March 2026 01:10:13 +0000 (0:00:03.621) 0:00:05.635 ********* 2026-03-10 01:14:04.884767 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-10 01:14:04.884772 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-10 01:14:04.884775 | orchestrator | 2026-03-10 01:14:04.884779 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-10 01:14:04.884783 | orchestrator | Tuesday 10 March 2026 01:10:21 +0000 (0:00:07.513) 0:00:13.149 ********* 2026-03-10 01:14:04.884787 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-10 01:14:04.884791 | orchestrator | 2026-03-10 01:14:04.884795 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-10 01:14:04.884798 | orchestrator | Tuesday 10 March 2026 01:10:24 +0000 (0:00:03.665) 0:00:16.814 ********* 2026-03-10 01:14:04.884802 | orchestrator | changed: 
[testbed-node-0] => (item=magnum -> service) 2026-03-10 01:14:04.884806 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-10 01:14:04.884811 | orchestrator | 2026-03-10 01:14:04.884814 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-10 01:14:04.884818 | orchestrator | Tuesday 10 March 2026 01:10:28 +0000 (0:00:04.120) 0:00:20.934 ********* 2026-03-10 01:14:04.884822 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-10 01:14:04.884826 | orchestrator | 2026-03-10 01:14:04.884830 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-10 01:14:04.884834 | orchestrator | Tuesday 10 March 2026 01:10:32 +0000 (0:00:03.727) 0:00:24.662 ********* 2026-03-10 01:14:04.884837 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-10 01:14:04.884841 | orchestrator | 2026-03-10 01:14:04.884854 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-10 01:14:04.884858 | orchestrator | Tuesday 10 March 2026 01:10:36 +0000 (0:00:03.870) 0:00:28.532 ********* 2026-03-10 01:14:04.884862 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:04.884866 | orchestrator | 2026-03-10 01:14:04.884869 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-10 01:14:04.884873 | orchestrator | Tuesday 10 March 2026 01:10:39 +0000 (0:00:03.345) 0:00:31.878 ********* 2026-03-10 01:14:04.884877 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:04.884881 | orchestrator | 2026-03-10 01:14:04.884884 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-10 01:14:04.884888 | orchestrator | Tuesday 10 March 2026 01:10:44 +0000 (0:00:04.150) 0:00:36.028 ********* 2026-03-10 01:14:04.884892 | orchestrator | changed: [testbed-node-0] 2026-03-10 
01:14:04.884895 | orchestrator | 2026-03-10 01:14:04.884899 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-10 01:14:04.884903 | orchestrator | Tuesday 10 March 2026 01:10:47 +0000 (0:00:03.813) 0:00:39.842 ********* 2026-03-10 01:14:04.884918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:04.884932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:04.884936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:04.884944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:04.884950 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 
'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:04.884958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:04.884966 | orchestrator | 2026-03-10 01:14:04.884970 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-10 01:14:04.884974 | orchestrator | Tuesday 10 March 2026 01:10:49 +0000 (0:00:01.825) 0:00:41.667 ********* 2026-03-10 01:14:04.884977 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:04.884981 | orchestrator | 2026-03-10 01:14:04.884985 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-10 01:14:04.884989 | orchestrator | Tuesday 10 March 2026 01:10:49 
+0000 (0:00:00.240) 0:00:41.907 ********* 2026-03-10 01:14:04.884992 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:04.884996 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:04.885000 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:04.885004 | orchestrator | 2026-03-10 01:14:04.885007 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-10 01:14:04.885011 | orchestrator | Tuesday 10 March 2026 01:10:50 +0000 (0:00:00.776) 0:00:42.684 ********* 2026-03-10 01:14:04.885015 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 01:14:04.885019 | orchestrator | 2026-03-10 01:14:04.885023 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-10 01:14:04.885026 | orchestrator | Tuesday 10 March 2026 01:10:51 +0000 (0:00:01.060) 0:00:43.745 ********* 2026-03-10 01:14:04.885030 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:04.885037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 
'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:04.885055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:04.885069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:04.885074 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:04.885078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 
'timeout': '30'}}})
2026-03-10 01:14:04.885082 | orchestrator |
2026-03-10 01:14:04.885086 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2026-03-10 01:14:04.885090 | orchestrator | Tuesday 10 March 2026 01:10:55 +0000 (0:00:03.263) 0:00:47.008 *********
2026-03-10 01:14:04.885093 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:14:04.885097 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:14:04.885101 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:14:04.885105 | orchestrator |
2026-03-10 01:14:04.885109 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-10 01:14:04.885113 | orchestrator | Tuesday 10 March 2026 01:10:55 +0000 (0:00:00.422) 0:00:47.430 *********
2026-03-10 01:14:04.885116 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:14:04.885120 | orchestrator |
2026-03-10 01:14:04.885175 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2026-03-10 01:14:04.885180 | orchestrator | Tuesday 10 March 2026 01:10:56 +0000 (0:00:01.139) 0:00:48.569 *********
2026-03-10 01:14:04.885187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:04.885200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:04.885205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:04.885210 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:04.885215 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:04.885222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:04.885230 | orchestrator | 2026-03-10 01:14:04.885235 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-10 01:14:04.885240 | orchestrator | Tuesday 10 March 2026 01:10:59 +0000 (0:00:03.134) 0:00:51.704 ********* 2026-03-10 01:14:04.885250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 01:14:04.885257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:14:04.885263 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:04.885270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 01:14:04.885281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:14:04.885335 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:04.885345 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 01:14:04.885357 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:14:04.885364 | orchestrator | skipping: 
[testbed-node-2] 2026-03-10 01:14:04.885370 | orchestrator | 2026-03-10 01:14:04.885374 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2026-03-10 01:14:04.885378 | orchestrator | Tuesday 10 March 2026 01:11:00 +0000 (0:00:00.651) 0:00:52.356 ********* 2026-03-10 01:14:04.885382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 01:14:04.885386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': 
'30'}}})  2026-03-10 01:14:04.885394 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:04.885401 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 01:14:04.885405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:14:04.885409 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:04.885417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 01:14:04.885421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:14:04.885425 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:04.885429 | orchestrator | 2026-03-10 01:14:04.885433 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2026-03-10 01:14:04.885437 | orchestrator | Tuesday 10 March 2026 01:11:01 +0000 (0:00:01.184) 0:00:53.541 ********* 2026-03-10 01:14:04.885441 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:04.885451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:04.885563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:04.885574 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:04.885580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:04.885586 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:04.885598 | orchestrator | 2026-03-10 01:14:04.885604 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2026-03-10 01:14:04.885610 | orchestrator | Tuesday 10 March 2026 01:11:03 +0000 (0:00:02.397) 0:00:55.939 ********* 2026-03-10 01:14:04.885620 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': 
{'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:04.885632 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:04.885639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': 
{'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:04.885645 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:04.885656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:04.885665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:04.885670 | orchestrator | 2026-03-10 01:14:04.885674 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2026-03-10 01:14:04.885681 | orchestrator | Tuesday 10 March 2026 01:11:09 +0000 (0:00:05.218) 0:01:01.158 ********* 2026-03-10 01:14:04.885691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 01:14:04.885698 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:14:04.885704 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:04.885710 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 01:14:04.885725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', 
'', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:14:04.885731 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:04.885737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2026-03-10 01:14:04.885747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2026-03-10 
01:14:04.885754 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:04.885760 | orchestrator | 2026-03-10 01:14:04.885767 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2026-03-10 01:14:04.885773 | orchestrator | Tuesday 10 March 2026 01:11:09 +0000 (0:00:00.818) 0:01:01.976 ********* 2026-03-10 01:14:04.885780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:04.885791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:04.885801 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-10 01:14:04.885807 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:04.885818 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:04.885825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:04.885852 | orchestrator | 2026-03-10 01:14:04.885858 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-10 01:14:04.885872 | orchestrator | Tuesday 10 March 2026 01:11:12 +0000 (0:00:02.855) 0:01:04.832 ********* 2026-03-10 01:14:04.885879 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:04.885885 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:04.885890 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:04.885896 | orchestrator | 
2026-03-10 01:14:04.885903 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2026-03-10 01:14:04.885909 | orchestrator | Tuesday 10 March 2026 01:11:13 +0000 (0:00:00.327) 0:01:05.159 ********* 2026-03-10 01:14:04.885916 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:04.885922 | orchestrator | 2026-03-10 01:14:04.885928 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2026-03-10 01:14:04.885935 | orchestrator | Tuesday 10 March 2026 01:11:15 +0000 (0:00:02.307) 0:01:07.467 ********* 2026-03-10 01:14:04.885941 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:04.885947 | orchestrator | 2026-03-10 01:14:04.885953 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2026-03-10 01:14:04.885960 | orchestrator | Tuesday 10 March 2026 01:11:17 +0000 (0:00:02.356) 0:01:09.823 ********* 2026-03-10 01:14:04.885966 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:04.885972 | orchestrator | 2026-03-10 01:14:04.885978 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-10 01:14:04.885984 | orchestrator | Tuesday 10 March 2026 01:11:33 +0000 (0:00:15.952) 0:01:25.776 ********* 2026-03-10 01:14:04.885990 | orchestrator | 2026-03-10 01:14:04.885997 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-10 01:14:04.886003 | orchestrator | Tuesday 10 March 2026 01:11:33 +0000 (0:00:00.077) 0:01:25.853 ********* 2026-03-10 01:14:04.886009 | orchestrator | 2026-03-10 01:14:04.886048 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2026-03-10 01:14:04.886057 | orchestrator | Tuesday 10 March 2026 01:11:33 +0000 (0:00:00.089) 0:01:25.942 ********* 2026-03-10 01:14:04.886063 | orchestrator | 2026-03-10 01:14:04.886074 | orchestrator | RUNNING HANDLER [magnum : 
Restart magnum-api container] ************************ 2026-03-10 01:14:04.886080 | orchestrator | Tuesday 10 March 2026 01:11:34 +0000 (0:00:00.073) 0:01:26.016 ********* 2026-03-10 01:14:04.886086 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:04.886093 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:14:04.886099 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:14:04.886106 | orchestrator | 2026-03-10 01:14:04.886113 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2026-03-10 01:14:04.886119 | orchestrator | Tuesday 10 March 2026 01:11:54 +0000 (0:00:20.317) 0:01:46.334 ********* 2026-03-10 01:14:04.886139 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:04.886146 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:14:04.886152 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:14:04.886158 | orchestrator | 2026-03-10 01:14:04.886165 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:14:04.886171 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-10 01:14:04.886178 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-10 01:14:04.886193 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-10 01:14:04.886200 | orchestrator | 2026-03-10 01:14:04.886207 | orchestrator | 2026-03-10 01:14:04.886213 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:14:04.886219 | orchestrator | Tuesday 10 March 2026 01:12:09 +0000 (0:00:15.636) 0:02:01.970 ********* 2026-03-10 01:14:04.886226 | orchestrator | =============================================================================== 2026-03-10 01:14:04.886233 | orchestrator | magnum : Restart magnum-api container 
---------------------------------- 20.32s 2026-03-10 01:14:04.886243 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.95s 2026-03-10 01:14:04.886250 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.64s 2026-03-10 01:14:04.886256 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.51s 2026-03-10 01:14:04.886263 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 5.22s 2026-03-10 01:14:04.886269 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.15s 2026-03-10 01:14:04.886275 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 4.12s 2026-03-10 01:14:04.886282 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.87s 2026-03-10 01:14:04.886289 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.81s 2026-03-10 01:14:04.886295 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.73s 2026-03-10 01:14:04.886301 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.67s 2026-03-10 01:14:04.886308 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.62s 2026-03-10 01:14:04.886314 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.35s 2026-03-10 01:14:04.886321 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 3.26s 2026-03-10 01:14:04.886327 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 3.13s 2026-03-10 01:14:04.886334 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.86s 2026-03-10 01:14:04.886341 | orchestrator | magnum : Copying over config.json files for services 
-------------------- 2.40s 2026-03-10 01:14:04.886347 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.36s 2026-03-10 01:14:04.886353 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.31s 2026-03-10 01:14:04.886360 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.83s 2026-03-10 01:14:04.886366 | orchestrator | 2026-03-10 01:14:04 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:07.924473 | orchestrator | 2026-03-10 01:14:07 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:14:07.925500 | orchestrator | 2026-03-10 01:14:07 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:14:07.928246 | orchestrator | 2026-03-10 01:14:07 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:14:07.930306 | orchestrator | 2026-03-10 01:14:07 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:14:07.930338 | orchestrator | 2026-03-10 01:14:07 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:10.967682 | orchestrator | 2026-03-10 01:14:10 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:14:10.967865 | orchestrator | 2026-03-10 01:14:10 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:14:10.969262 | orchestrator | 2026-03-10 01:14:10 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:14:10.969759 | orchestrator | 2026-03-10 01:14:10 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:14:10.969827 | orchestrator | 2026-03-10 01:14:10 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:14.014416 | orchestrator | 2026-03-10 01:14:14 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:14:14.017310 | 
orchestrator | 2026-03-10 01:14:14 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:14:14.018419 | orchestrator | 2026-03-10 01:14:14 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:14:14.020514 | orchestrator | 2026-03-10 01:14:14 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:14:14.020557 | orchestrator | 2026-03-10 01:14:14 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:17.073225 | orchestrator | 2026-03-10 01:14:17 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:14:17.074615 | orchestrator | 2026-03-10 01:14:17 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:14:17.076425 | orchestrator | 2026-03-10 01:14:17 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:14:17.077780 | orchestrator | 2026-03-10 01:14:17 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:14:17.077816 | orchestrator | 2026-03-10 01:14:17 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:20.112728 | orchestrator | 2026-03-10 01:14:20 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:14:20.113321 | orchestrator | 2026-03-10 01:14:20 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:14:20.114333 | orchestrator | 2026-03-10 01:14:20 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:14:20.115531 | orchestrator | 2026-03-10 01:14:20 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:14:20.115593 | orchestrator | 2026-03-10 01:14:20 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:23.153847 | orchestrator | 2026-03-10 01:14:23 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:14:23.154605 | orchestrator | 2026-03-10 
01:14:23 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:14:23.155622 | orchestrator | 2026-03-10 01:14:23 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:14:23.156310 | orchestrator | 2026-03-10 01:14:23 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:14:23.156342 | orchestrator | 2026-03-10 01:14:23 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:26.226548 | orchestrator | 2026-03-10 01:14:26 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:14:26.228148 | orchestrator | 2026-03-10 01:14:26 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:14:26.229433 | orchestrator | 2026-03-10 01:14:26 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:14:26.230389 | orchestrator | 2026-03-10 01:14:26 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:14:26.230447 | orchestrator | 2026-03-10 01:14:26 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:29.266174 | orchestrator | 2026-03-10 01:14:29 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:14:29.267209 | orchestrator | 2026-03-10 01:14:29 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:14:29.267823 | orchestrator | 2026-03-10 01:14:29 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:14:29.269809 | orchestrator | 2026-03-10 01:14:29 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:14:29.269857 | orchestrator | 2026-03-10 01:14:29 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:32.331615 | orchestrator | 2026-03-10 01:14:32 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:14:32.334001 | orchestrator | 2026-03-10 01:14:32 | INFO  | Task 
791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:14:32.336631 | orchestrator | 2026-03-10 01:14:32 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:14:32.339433 | orchestrator | 2026-03-10 01:14:32 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:14:32.339495 | orchestrator | 2026-03-10 01:14:32 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:35.385905 | orchestrator | 2026-03-10 01:14:35 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:14:35.387943 | orchestrator | 2026-03-10 01:14:35 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:14:35.390545 | orchestrator | 2026-03-10 01:14:35 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:14:35.393558 | orchestrator | 2026-03-10 01:14:35 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:14:35.393629 | orchestrator | 2026-03-10 01:14:35 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:38.443661 | orchestrator | 2026-03-10 01:14:38 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:14:38.445417 | orchestrator | 2026-03-10 01:14:38 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:14:38.450575 | orchestrator | 2026-03-10 01:14:38 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:14:38.450651 | orchestrator | 2026-03-10 01:14:38 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:14:38.450672 | orchestrator | 2026-03-10 01:14:38 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:41.491995 | orchestrator | 2026-03-10 01:14:41 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:14:41.493852 | orchestrator | 2026-03-10 01:14:41 | INFO  | Task 
791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:14:41.497022 | orchestrator | 2026-03-10 01:14:41 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state STARTED 2026-03-10 01:14:41.499833 | orchestrator | 2026-03-10 01:14:41 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:14:41.499905 | orchestrator | 2026-03-10 01:14:41 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:44.540959 | orchestrator | 2026-03-10 01:14:44 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:14:44.541601 | orchestrator | 2026-03-10 01:14:44 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:14:44.544214 | orchestrator | 2026-03-10 01:14:44 | INFO  | Task 6b76fefb-d559-4284-a0ec-0c557206eee7 is in state SUCCESS 2026-03-10 01:14:44.546348 | orchestrator | 2026-03-10 01:14:44.546395 | orchestrator | 2026-03-10 01:14:44.546410 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:14:44.546423 | orchestrator | 2026-03-10 01:14:44.546436 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:14:44.546479 | orchestrator | Tuesday 10 March 2026 01:11:39 +0000 (0:00:00.429) 0:00:00.429 ********* 2026-03-10 01:14:44.546492 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:14:44.546507 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:14:44.546520 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:14:44.546582 | orchestrator | 2026-03-10 01:14:44.546599 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:14:44.546637 | orchestrator | Tuesday 10 March 2026 01:11:39 +0000 (0:00:00.345) 0:00:00.775 ********* 2026-03-10 01:14:44.546655 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-10 01:14:44.546669 | orchestrator | ok: [testbed-node-1] => 
(item=enable_glance_True) 2026-03-10 01:14:44.546683 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-10 01:14:44.546697 | orchestrator | 2026-03-10 01:14:44.546709 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-10 01:14:44.546723 | orchestrator | 2026-03-10 01:14:44.546736 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-10 01:14:44.546751 | orchestrator | Tuesday 10 March 2026 01:11:40 +0000 (0:00:00.618) 0:00:01.393 ********* 2026-03-10 01:14:44.546764 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:14:44.546778 | orchestrator | 2026-03-10 01:14:44.546791 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-10 01:14:44.546805 | orchestrator | Tuesday 10 March 2026 01:11:40 +0000 (0:00:00.735) 0:00:02.129 ********* 2026-03-10 01:14:44.546817 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-10 01:14:44.546828 | orchestrator | 2026-03-10 01:14:44.546840 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-10 01:14:44.546853 | orchestrator | Tuesday 10 March 2026 01:11:44 +0000 (0:00:03.690) 0:00:05.820 ********* 2026-03-10 01:14:44.546866 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-10 01:14:44.546879 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-10 01:14:44.546891 | orchestrator | 2026-03-10 01:14:44.546904 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-10 01:14:44.546916 | orchestrator | Tuesday 10 March 2026 01:11:51 +0000 (0:00:06.890) 0:00:12.710 ********* 2026-03-10 01:14:44.546928 | orchestrator | ok: 
[testbed-node-0] => (item=service) 2026-03-10 01:14:44.546942 | orchestrator | 2026-03-10 01:14:44.546956 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2026-03-10 01:14:44.546986 | orchestrator | Tuesday 10 March 2026 01:11:55 +0000 (0:00:03.522) 0:00:16.232 ********* 2026-03-10 01:14:44.547001 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-10 01:14:44.547014 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-10 01:14:44.547026 | orchestrator | 2026-03-10 01:14:44.547039 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-10 01:14:44.547051 | orchestrator | Tuesday 10 March 2026 01:11:59 +0000 (0:00:04.037) 0:00:20.270 ********* 2026-03-10 01:14:44.547064 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-10 01:14:44.547077 | orchestrator | 2026-03-10 01:14:44.547150 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-10 01:14:44.547164 | orchestrator | Tuesday 10 March 2026 01:12:02 +0000 (0:00:03.665) 0:00:23.935 ********* 2026-03-10 01:14:44.547177 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-10 01:14:44.547191 | orchestrator | 2026-03-10 01:14:44.547203 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-10 01:14:44.547215 | orchestrator | Tuesday 10 March 2026 01:12:06 +0000 (0:00:04.129) 0:00:28.064 ********* 2026-03-10 01:14:44.547260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:14:44.547305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:14:44.547324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:14:44.547347 | orchestrator | 2026-03-10 01:14:44.547361 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-10 01:14:44.547374 | orchestrator | Tuesday 10 March 2026 01:12:10 +0000 (0:00:03.714) 0:00:31.779 ********* 2026-03-10 01:14:44.547387 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:14:44.547400 | orchestrator | 2026-03-10 01:14:44.547414 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-10 01:14:44.547437 | orchestrator | Tuesday 10 March 2026 01:12:11 +0000 (0:00:00.886) 0:00:32.665 ********* 2026-03-10 01:14:44.547451 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:44.547465 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:14:44.547478 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:14:44.547492 | orchestrator | 2026-03-10 01:14:44.547504 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-10 01:14:44.547517 | orchestrator | Tuesday 10 March 2026 01:12:15 +0000 (0:00:04.329) 0:00:36.994 ********* 2026-03-10 01:14:44.547531 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-10 01:14:44.547545 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-10 01:14:44.547558 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-10 01:14:44.547572 | orchestrator | 2026-03-10 01:14:44.547585 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-10 01:14:44.547598 | orchestrator | Tuesday 10 March 2026 01:12:17 +0000 (0:00:01.770) 0:00:38.765 ********* 2026-03-10 01:14:44.547611 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-10 01:14:44.547625 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-10 01:14:44.547639 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-10 01:14:44.547652 | orchestrator | 2026-03-10 01:14:44.547664 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-10 01:14:44.547678 | orchestrator | Tuesday 10 March 2026 01:12:18 +0000 (0:00:01.307) 0:00:40.073 ********* 2026-03-10 01:14:44.547691 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:14:44.547704 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:14:44.547717 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:14:44.547730 | orchestrator | 2026-03-10 01:14:44.547744 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-10 01:14:44.547757 | orchestrator | Tuesday 10 March 2026 01:12:19 +0000 (0:00:00.886) 0:00:40.959 ********* 2026-03-10 01:14:44.547770 | orchestrator | skipping: [testbed-node-0] 2026-03-10 
01:14:44.547783 | orchestrator | 2026-03-10 01:14:44.547795 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2026-03-10 01:14:44.547809 | orchestrator | Tuesday 10 March 2026 01:12:19 +0000 (0:00:00.147) 0:00:41.107 ********* 2026-03-10 01:14:44.547821 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:44.547844 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:44.547858 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:44.547872 | orchestrator | 2026-03-10 01:14:44.547892 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-10 01:14:44.547905 | orchestrator | Tuesday 10 March 2026 01:12:20 +0000 (0:00:00.361) 0:00:41.468 ********* 2026-03-10 01:14:44.547918 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:14:44.547931 | orchestrator | 2026-03-10 01:14:44.547944 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-10 01:14:44.547957 | orchestrator | Tuesday 10 March 2026 01:12:20 +0000 (0:00:00.644) 0:00:42.113 ********* 2026-03-10 01:14:44.547972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': 
'30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:14:44.547999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 
fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:14:44.548028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:14:44.548043 | orchestrator | 2026-03-10 01:14:44.548056 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-10 01:14:44.548070 | orchestrator | Tuesday 10 March 2026 01:12:25 +0000 (0:00:04.638) 0:00:46.751 ********* 2026-03-10 01:14:44.548119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-10 01:14:44.548137 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:44.548157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 
2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-10 01:14:44.548179 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:44.548204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-10 01:14:44.548220 | 
orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:44.548233 | orchestrator | 2026-03-10 01:14:44.548246 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-10 01:14:44.548261 | orchestrator | Tuesday 10 March 2026 01:12:29 +0000 (0:00:04.301) 0:00:51.053 ********* 2026-03-10 01:14:44.548282 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-10 01:14:44.548305 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:44.548319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-10 01:14:44.548333 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:44.548357 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-10 01:14:44.548379 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:44.548392 | orchestrator | 2026-03-10 01:14:44.548404 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-10 01:14:44.548412 | orchestrator | Tuesday 10 March 2026 01:12:34 +0000 (0:00:04.994) 0:00:56.047 ********* 
2026-03-10 01:14:44.548420 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:44.548428 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:44.548435 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:44.548443 | orchestrator | 2026-03-10 01:14:44.548460 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-10 01:14:44.548469 | orchestrator | Tuesday 10 March 2026 01:12:39 +0000 (0:00:04.771) 0:01:00.819 ********* 2026-03-10 01:14:44.548477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:14:44.548494 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:14:44.548513 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:14:44.548522 | orchestrator | 2026-03-10 01:14:44.548530 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-10 01:14:44.548538 | orchestrator | Tuesday 10 March 2026 01:12:44 +0000 (0:00:05.168) 0:01:05.988 ********* 2026-03-10 01:14:44.548546 | orchestrator | changed: 
[testbed-node-0] 2026-03-10 01:14:44.548553 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:14:44.548561 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:14:44.548569 | orchestrator | 2026-03-10 01:14:44.548577 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-10 01:14:44.548585 | orchestrator | Tuesday 10 March 2026 01:12:51 +0000 (0:00:06.777) 0:01:12.766 ********* 2026-03-10 01:14:44.548593 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:44.548600 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:44.548608 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:44.548616 | orchestrator | 2026-03-10 01:14:44.548623 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-10 01:14:44.548631 | orchestrator | Tuesday 10 March 2026 01:12:55 +0000 (0:00:04.296) 0:01:17.062 ********* 2026-03-10 01:14:44.548639 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:44.548647 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:44.548655 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:44.548662 | orchestrator | 2026-03-10 01:14:44.548670 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-10 01:14:44.548678 | orchestrator | Tuesday 10 March 2026 01:13:00 +0000 (0:00:05.058) 0:01:22.121 ********* 2026-03-10 01:14:44.548686 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:44.548851 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:44.548863 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:44.548870 | orchestrator | 2026-03-10 01:14:44.548878 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-10 01:14:44.548886 | orchestrator | Tuesday 10 March 2026 01:13:06 +0000 (0:00:05.687) 0:01:27.809 ********* 2026-03-10 01:14:44.548894 | orchestrator | skipping: 
[testbed-node-2] 2026-03-10 01:14:44.548902 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:44.548909 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:44.548917 | orchestrator | 2026-03-10 01:14:44.548925 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-10 01:14:44.548938 | orchestrator | Tuesday 10 March 2026 01:13:11 +0000 (0:00:05.168) 0:01:32.977 ********* 2026-03-10 01:14:44.548951 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:44.548959 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:44.548967 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:44.548974 | orchestrator | 2026-03-10 01:14:44.548982 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-10 01:14:44.548990 | orchestrator | Tuesday 10 March 2026 01:13:12 +0000 (0:00:00.373) 0:01:33.350 ********* 2026-03-10 01:14:44.548997 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-10 01:14:44.549006 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:44.549013 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-10 01:14:44.549021 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:44.549029 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-10 01:14:44.549036 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:44.549044 | orchestrator | 2026-03-10 01:14:44.549052 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-10 01:14:44.549059 | orchestrator | Tuesday 10 March 2026 01:13:16 +0000 (0:00:04.796) 0:01:38.147 ********* 2026-03-10 01:14:44.549067 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:14:44.549074 | orchestrator | changed: [testbed-node-0] 
2026-03-10 01:14:44.549141 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:14:44.549153 | orchestrator | 2026-03-10 01:14:44.549161 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-10 01:14:44.549168 | orchestrator | Tuesday 10 March 2026 01:13:22 +0000 (0:00:05.827) 0:01:43.975 ********* 2026-03-10 01:14:44.549184 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:14:44.549220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:14:44.549243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 
'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-10 01:14:44.549259 | orchestrator | 2026-03-10 01:14:44.549272 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-10 01:14:44.549280 | orchestrator | Tuesday 10 March 2026 01:13:26 +0000 (0:00:04.201) 0:01:48.177 ********* 2026-03-10 01:14:44.549288 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:44.549295 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:44.549303 | orchestrator | skipping: 
[testbed-node-2] 2026-03-10 01:14:44.549316 | orchestrator | 2026-03-10 01:14:44.549324 | orchestrator | TASK [glance : Creating Glance database] *************************************** 2026-03-10 01:14:44.549332 | orchestrator | Tuesday 10 March 2026 01:13:27 +0000 (0:00:00.327) 0:01:48.504 ********* 2026-03-10 01:14:44.549340 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:44.549348 | orchestrator | 2026-03-10 01:14:44.549355 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-03-10 01:14:44.549363 | orchestrator | Tuesday 10 March 2026 01:13:29 +0000 (0:00:02.291) 0:01:50.796 ********* 2026-03-10 01:14:44.549371 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:44.549378 | orchestrator | 2026-03-10 01:14:44.549386 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-10 01:14:44.549394 | orchestrator | Tuesday 10 March 2026 01:13:32 +0000 (0:00:02.538) 0:01:53.334 ********* 2026-03-10 01:14:44.549401 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:44.549409 | orchestrator | 2026-03-10 01:14:44.549417 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-10 01:14:44.549425 | orchestrator | Tuesday 10 March 2026 01:13:34 +0000 (0:00:02.317) 0:01:55.652 ********* 2026-03-10 01:14:44.549432 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:44.549440 | orchestrator | 2026-03-10 01:14:44.549448 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-10 01:14:44.549455 | orchestrator | Tuesday 10 March 2026 01:14:03 +0000 (0:00:29.103) 0:02:24.755 ********* 2026-03-10 01:14:44.549463 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:44.549471 | orchestrator | 2026-03-10 01:14:44.549480 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-10 01:14:44.549489 | 
orchestrator | Tuesday 10 March 2026 01:14:06 +0000 (0:00:02.838) 0:02:27.593 ********* 2026-03-10 01:14:44.549498 | orchestrator | 2026-03-10 01:14:44.549511 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-10 01:14:44.549520 | orchestrator | Tuesday 10 March 2026 01:14:06 +0000 (0:00:00.060) 0:02:27.654 ********* 2026-03-10 01:14:44.549529 | orchestrator | 2026-03-10 01:14:44.549539 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-10 01:14:44.549547 | orchestrator | Tuesday 10 March 2026 01:14:06 +0000 (0:00:00.064) 0:02:27.719 ********* 2026-03-10 01:14:44.549554 | orchestrator | 2026-03-10 01:14:44.549562 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-10 01:14:44.549570 | orchestrator | Tuesday 10 March 2026 01:14:06 +0000 (0:00:00.074) 0:02:27.793 ********* 2026-03-10 01:14:44.549577 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:44.549585 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:14:44.549593 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:14:44.549600 | orchestrator | 2026-03-10 01:14:44.549608 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:14:44.549617 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-10 01:14:44.549626 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-10 01:14:44.549633 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-10 01:14:44.549641 | orchestrator | 2026-03-10 01:14:44.549649 | orchestrator | 2026-03-10 01:14:44.549657 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:14:44.549665 | orchestrator | Tuesday 10 
March 2026 01:14:43 +0000 (0:00:36.870) 0:03:04.664 ********* 2026-03-10 01:14:44.549673 | orchestrator | =============================================================================== 2026-03-10 01:14:44.549680 | orchestrator | glance : Restart glance-api container ---------------------------------- 36.87s 2026-03-10 01:14:44.549688 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.10s 2026-03-10 01:14:44.549700 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.89s 2026-03-10 01:14:44.549708 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 6.78s 2026-03-10 01:14:44.549716 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 5.83s 2026-03-10 01:14:44.549724 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 5.69s 2026-03-10 01:14:44.549732 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.17s 2026-03-10 01:14:44.549739 | orchestrator | glance : Copying over config.json files for services -------------------- 5.17s 2026-03-10 01:14:44.549751 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 5.06s 2026-03-10 01:14:44.549759 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 4.99s 2026-03-10 01:14:44.549766 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 4.80s 2026-03-10 01:14:44.549774 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.77s 2026-03-10 01:14:44.549782 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.64s 2026-03-10 01:14:44.549788 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.33s 2026-03-10 01:14:44.549795 | orchestrator | service-cert-copy : glance | 
Copying over backend internal TLS certificate --- 4.30s 2026-03-10 01:14:44.549802 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 4.30s 2026-03-10 01:14:44.549808 | orchestrator | glance : Check glance containers ---------------------------------------- 4.20s 2026-03-10 01:14:44.549815 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.13s 2026-03-10 01:14:44.549821 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.04s 2026-03-10 01:14:44.549828 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.71s 2026-03-10 01:14:44.549834 | orchestrator | 2026-03-10 01:14:44 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:14:44.549841 | orchestrator | 2026-03-10 01:14:44 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:47.596782 | orchestrator | 2026-03-10 01:14:47 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:14:47.599035 | orchestrator | 2026-03-10 01:14:47 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:14:47.600057 | orchestrator | 2026-03-10 01:14:47 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:14:47.601238 | orchestrator | 2026-03-10 01:14:47 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:14:47.601270 | orchestrator | 2026-03-10 01:14:47 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:50.641684 | orchestrator | 2026-03-10 01:14:50 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:14:50.643165 | orchestrator | 2026-03-10 01:14:50 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:14:50.645258 | orchestrator | 2026-03-10 01:14:50 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 
01:14:50.646304 | orchestrator | 2026-03-10 01:14:50 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:14:50.646338 | orchestrator | 2026-03-10 01:14:50 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:53.689902 | orchestrator | 2026-03-10 01:14:53 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state STARTED 2026-03-10 01:14:53.691370 | orchestrator | 2026-03-10 01:14:53 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:14:53.692837 | orchestrator | 2026-03-10 01:14:53 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:14:53.694708 | orchestrator | 2026-03-10 01:14:53 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:14:53.694790 | orchestrator | 2026-03-10 01:14:53 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:56.734516 | orchestrator | 2026-03-10 01:14:56 | INFO  | Task acba4b7d-c2ac-4200-9ddd-4907bd324c35 is in state SUCCESS 2026-03-10 01:14:56.736911 | orchestrator | 2026-03-10 01:14:56.736967 | orchestrator | 2026-03-10 01:14:56.736978 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:14:56.736986 | orchestrator | 2026-03-10 01:14:56.736994 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:14:56.737002 | orchestrator | Tuesday 10 March 2026 01:11:54 +0000 (0:00:00.313) 0:00:00.313 ********* 2026-03-10 01:14:56.737009 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:14:56.737018 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:14:56.737025 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:14:56.737032 | orchestrator | 2026-03-10 01:14:56.737039 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:14:56.737046 | orchestrator | Tuesday 10 March 2026 01:11:54 +0000 (0:00:00.339) 0:00:00.653 
********* 2026-03-10 01:14:56.737053 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-10 01:14:56.737059 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-10 01:14:56.737065 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-10 01:14:56.737121 | orchestrator | 2026-03-10 01:14:56.737129 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-10 01:14:56.737136 | orchestrator | 2026-03-10 01:14:56.737141 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-10 01:14:56.737147 | orchestrator | Tuesday 10 March 2026 01:11:55 +0000 (0:00:00.818) 0:00:01.471 ********* 2026-03-10 01:14:56.737153 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:14:56.737160 | orchestrator | 2026-03-10 01:14:56.737183 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-10 01:14:56.737191 | orchestrator | Tuesday 10 March 2026 01:11:56 +0000 (0:00:00.865) 0:00:02.336 ********* 2026-03-10 01:14:56.737199 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-10 01:14:56.737206 | orchestrator | 2026-03-10 01:14:56.737212 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-10 01:14:56.737220 | orchestrator | Tuesday 10 March 2026 01:11:59 +0000 (0:00:03.620) 0:00:05.957 ********* 2026-03-10 01:14:56.737228 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-10 01:14:56.737235 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-10 01:14:56.737243 | orchestrator | 2026-03-10 01:14:56.737250 | orchestrator | TASK [service-ks-register : cinder 
| Creating projects] ************************ 2026-03-10 01:14:56.737257 | orchestrator | Tuesday 10 March 2026 01:12:06 +0000 (0:00:06.967) 0:00:12.924 ********* 2026-03-10 01:14:56.737265 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-10 01:14:56.737272 | orchestrator | 2026-03-10 01:14:56.737279 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-10 01:14:56.737286 | orchestrator | Tuesday 10 March 2026 01:12:10 +0000 (0:00:03.496) 0:00:16.421 ********* 2026-03-10 01:14:56.737293 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-10 01:14:56.737301 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-10 01:14:56.737308 | orchestrator | 2026-03-10 01:14:56.737315 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-10 01:14:56.737322 | orchestrator | Tuesday 10 March 2026 01:12:14 +0000 (0:00:04.166) 0:00:20.587 ********* 2026-03-10 01:14:56.737329 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-10 01:14:56.737424 | orchestrator | 2026-03-10 01:14:56.737437 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-10 01:14:56.737445 | orchestrator | Tuesday 10 March 2026 01:12:17 +0000 (0:00:03.567) 0:00:24.155 ********* 2026-03-10 01:14:56.737452 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-10 01:14:56.737459 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-10 01:14:56.737466 | orchestrator | 2026-03-10 01:14:56.737473 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-10 01:14:56.737746 | orchestrator | Tuesday 10 March 2026 01:12:25 +0000 (0:00:07.598) 0:00:31.753 ********* 2026-03-10 01:14:56.737762 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:56.737787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:56.737803 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:56.737813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.737908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.737923 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.737931 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.737946 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.737958 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.737965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.737977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.737985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.737991 | orchestrator | 2026-03-10 01:14:56.737998 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-10 01:14:56.738005 | orchestrator | Tuesday 10 March 2026 01:12:28 +0000 (0:00:02.571) 0:00:34.324 ********* 2026-03-10 01:14:56.738057 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:56.738093 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:56.738102 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:56.738108 | orchestrator | 2026-03-10 01:14:56.738116 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-10 01:14:56.738123 | orchestrator | Tuesday 10 March 2026 01:12:28 +0000 (0:00:00.373) 0:00:34.698 ********* 2026-03-10 01:14:56.738130 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:14:56.738138 | orchestrator | 2026-03-10 01:14:56.738338 | orchestrator | 
TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-10 01:14:56.738348 | orchestrator | Tuesday 10 March 2026 01:12:29 +0000 (0:00:01.165) 0:00:35.864 ********* 2026-03-10 01:14:56.738377 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-10 01:14:56.738385 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-10 01:14:56.738393 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-10 01:14:56.738400 | orchestrator | changed: [testbed-node-1] => (item=cinder-backup) 2026-03-10 01:14:56.738407 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-10 01:14:56.738414 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-10 01:14:56.738421 | orchestrator | 2026-03-10 01:14:56.738428 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-10 01:14:56.738435 | orchestrator | Tuesday 10 March 2026 01:12:32 +0000 (0:00:02.400) 0:00:38.265 ********* 2026-03-10 01:14:56.738450 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 
'ceph', 'enabled': True}])  2026-03-10 01:14:56.738468 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-10 01:14:56.738477 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-10 01:14:56.738485 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-10 01:14:56.738509 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-10 01:14:56.738522 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': 
'30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-10 01:14:56.738535 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-10 01:14:56.738543 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-10 01:14:56.738550 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-10 01:14:56.738573 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-10 01:14:56.738582 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, 
{'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-10 01:14:56.738598 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-10 01:14:56.738605 | orchestrator | 2026-03-10 01:14:56.738613 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-10 01:14:56.738620 | orchestrator | Tuesday 10 March 2026 01:12:36 +0000 (0:00:04.742) 0:00:43.007 ********* 2026-03-10 01:14:56.738628 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-10 01:14:56.738635 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-10 01:14:56.738642 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-10 01:14:56.738650 | orchestrator | 2026-03-10 01:14:56.738657 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-10 01:14:56.738664 | orchestrator | Tuesday 10 March 2026 01:12:39 +0000 (0:00:02.658) 0:00:45.666 ********* 2026-03-10 01:14:56.738671 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-10 01:14:56.738678 | orchestrator | changed: [testbed-node-1] => 
(item=ceph.client.cinder.keyring) 2026-03-10 01:14:56.738685 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-10 01:14:56.738710 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-10 01:14:56.738718 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-10 01:14:56.738726 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-10 01:14:56.738733 | orchestrator | 2026-03-10 01:14:56.738741 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-10 01:14:56.738748 | orchestrator | Tuesday 10 March 2026 01:12:42 +0000 (0:00:03.324) 0:00:48.990 ********* 2026-03-10 01:14:56.738756 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-10 01:14:56.738763 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-10 01:14:56.738771 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-10 01:14:56.738778 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-10 01:14:56.738786 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-10 01:14:56.738793 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-10 01:14:56.738800 | orchestrator | 2026-03-10 01:14:56.738808 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-10 01:14:56.738816 | orchestrator | Tuesday 10 March 2026 01:12:44 +0000 (0:00:01.184) 0:00:50.175 ********* 2026-03-10 01:14:56.738823 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:56.738830 | orchestrator | 2026-03-10 01:14:56.738838 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-10 01:14:56.738845 | orchestrator | Tuesday 10 March 2026 01:12:44 +0000 (0:00:00.167) 0:00:50.343 ********* 2026-03-10 01:14:56.738853 | orchestrator | skipping: 
[testbed-node-0] 2026-03-10 01:14:56.738860 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:56.738868 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:56.738875 | orchestrator | 2026-03-10 01:14:56.738883 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-10 01:14:56.738890 | orchestrator | Tuesday 10 March 2026 01:12:44 +0000 (0:00:00.440) 0:00:50.784 ********* 2026-03-10 01:14:56.738904 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:14:56.738912 | orchestrator | 2026-03-10 01:14:56.738919 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2026-03-10 01:14:56.738942 | orchestrator | Tuesday 10 March 2026 01:12:45 +0000 (0:00:01.111) 0:00:51.896 ********* 2026-03-10 01:14:56.738950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:56.738965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:56.738974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-10 01:14:56.738983 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.738991 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.739011 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.739024 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.739033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.739042 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.739050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.739059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.739094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.739102 | 
orchestrator | 2026-03-10 01:14:56.739109 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-10 01:14:56.739115 | orchestrator | Tuesday 10 March 2026 01:12:50 +0000 (0:00:04.866) 0:00:56.762 ********* 2026-03-10 01:14:56.739126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-10 01:14:56.739134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:14:56.739143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 
'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 01:14:56.739151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-10 01:14:56.739170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 01:14:56.739178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:14:56.739187 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:56.739199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 01:14:56.739208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 01:14:56.739216 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:56.739224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-10 01:14:56.739238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:14:56.739251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 01:14:56.739262 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 01:14:56.739270 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:56.739279 | orchestrator | 2026-03-10 01:14:56.739287 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-10 01:14:56.739295 | orchestrator | Tuesday 10 March 2026 01:12:51 +0000 (0:00:00.852) 0:00:57.614 ********* 2026-03-10 01:14:56.739303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 
'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-10 01:14:56.739311 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:14:56.739321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-10 01:14:56.739331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-10 01:14:56.739339 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:56.739350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-10 01:14:56.739358 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739365 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739385 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:14:56.739393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:14:56.739404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739423 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739431 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:14:56.739438 | orchestrator |
2026-03-10 01:14:56.739445 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2026-03-10 01:14:56.739450 | orchestrator | Tuesday 10 March 2026 01:12:53 +0000 (0:00:01.608) 0:00:59.223 *********
2026-03-10 01:14:56.739456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:14:56.739468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:14:56.739480 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:14:56.739491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739529 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739579 | orchestrator |
2026-03-10 01:14:56.739586 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] **********************************
2026-03-10 01:14:56.739593 | orchestrator | Tuesday 10 March 2026 01:12:58 +0000 (0:00:04.948) 0:01:04.172 *********
2026-03-10 01:14:56.739601 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-10 01:14:56.739608 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-10 01:14:56.739615 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)
2026-03-10 01:14:56.739622 | orchestrator |
2026-03-10 01:14:56.739629 | orchestrator | TASK [cinder : Copying over cinder.conf] ***************************************
2026-03-10 01:14:56.739636 | orchestrator | Tuesday 10 March 2026 01:13:00 +0000 (0:00:02.407) 0:01:06.579 *********
2026-03-10 01:14:56.739648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:14:56.739656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:14:56.739666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:14:56.739679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739705 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739735 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739743 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739757 | orchestrator |
2026-03-10 01:14:56.739764 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ********************
2026-03-10 01:14:56.739771 | orchestrator | Tuesday 10 March 2026 01:13:16 +0000 (0:00:16.122) 0:01:22.702 *********
2026-03-10 01:14:56.739779 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:14:56.739786 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:14:56.739793 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:14:56.739800 | orchestrator |
2026-03-10 01:14:56.739807 | orchestrator | TASK [cinder : Copying over existing policy file] ******************************
2026-03-10 01:14:56.739817 | orchestrator | Tuesday 10 March 2026 01:13:18 +0000 (0:00:02.387) 0:01:25.090 *********
2026-03-10 01:14:56.739825 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:14:56.739835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739850 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739857 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739865 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:14:56.739872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:14:56.739885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739916 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:14:56.739924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:14:56.739931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739939 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.739955 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:14:56.739965 | orchestrator |
2026-03-10 01:14:56.739971 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2026-03-10 01:14:56.739977 | orchestrator | Tuesday 10 March 2026 01:13:20 +0000 (0:00:01.117) 0:01:26.207 *********
2026-03-10 01:14:56.739983 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:14:56.739989 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:14:56.739995 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:14:56.740001 | orchestrator |
2026-03-10 01:14:56.740007 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2026-03-10 01:14:56.740013 | orchestrator | Tuesday 10 March 2026 01:13:20 +0000 (0:00:00.510) 0:01:26.718 *********
2026-03-10 01:14:56.740023 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:14:56.740031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:14:56.740038 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2026-03-10 01:14:56.740050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.740058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.740114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.740124 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.740132 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2026-03-10 01:14:56.740140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True,
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.740152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.740168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.740180 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-10 01:14:56.740188 | orchestrator | 2026-03-10 01:14:56.740196 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-10 01:14:56.740203 | orchestrator | Tuesday 10 March 2026 01:13:24 +0000 (0:00:03.804) 0:01:30.522 ********* 2026-03-10 01:14:56.740210 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:56.740218 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:14:56.740225 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:14:56.740233 | orchestrator | 2026-03-10 01:14:56.740240 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-10 01:14:56.740248 | orchestrator | Tuesday 10 March 2026 01:13:25 +0000 (0:00:00.918) 0:01:31.441 ********* 2026-03-10 01:14:56.740255 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:56.740263 | orchestrator | 2026-03-10 01:14:56.740270 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-10 01:14:56.740277 | orchestrator | Tuesday 10 March 2026 01:13:27 +0000 (0:00:02.485) 0:01:33.926 ********* 2026-03-10 01:14:56.740285 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:56.740292 | orchestrator | 2026-03-10 01:14:56.740300 | orchestrator | TASK [cinder : Running Cinder bootstrap container] 
***************************** 2026-03-10 01:14:56.740307 | orchestrator | Tuesday 10 March 2026 01:13:30 +0000 (0:00:02.542) 0:01:36.468 ********* 2026-03-10 01:14:56.740314 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:56.740322 | orchestrator | 2026-03-10 01:14:56.740330 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-10 01:14:56.740337 | orchestrator | Tuesday 10 March 2026 01:13:50 +0000 (0:00:19.856) 0:01:56.325 ********* 2026-03-10 01:14:56.740345 | orchestrator | 2026-03-10 01:14:56.740351 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-10 01:14:56.740359 | orchestrator | Tuesday 10 March 2026 01:13:50 +0000 (0:00:00.071) 0:01:56.396 ********* 2026-03-10 01:14:56.740366 | orchestrator | 2026-03-10 01:14:56.740373 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-10 01:14:56.740381 | orchestrator | Tuesday 10 March 2026 01:13:50 +0000 (0:00:00.089) 0:01:56.486 ********* 2026-03-10 01:14:56.740388 | orchestrator | 2026-03-10 01:14:56.740396 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-10 01:14:56.740403 | orchestrator | Tuesday 10 March 2026 01:13:50 +0000 (0:00:00.090) 0:01:56.576 ********* 2026-03-10 01:14:56.740411 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:56.740418 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:14:56.740425 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:14:56.740438 | orchestrator | 2026-03-10 01:14:56.740445 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-10 01:14:56.740453 | orchestrator | Tuesday 10 March 2026 01:14:18 +0000 (0:00:27.638) 0:02:24.215 ********* 2026-03-10 01:14:56.740460 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:56.740468 | orchestrator | changed: [testbed-node-2] 
2026-03-10 01:14:56.740475 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:14:56.740482 | orchestrator | 2026-03-10 01:14:56.740490 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-10 01:14:56.740498 | orchestrator | Tuesday 10 March 2026 01:14:25 +0000 (0:00:07.797) 0:02:32.012 ********* 2026-03-10 01:14:56.740505 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:56.740513 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:14:56.740520 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:14:56.740527 | orchestrator | 2026-03-10 01:14:56.740534 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-10 01:14:56.740542 | orchestrator | Tuesday 10 March 2026 01:14:47 +0000 (0:00:21.420) 0:02:53.433 ********* 2026-03-10 01:14:56.740549 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:14:56.740557 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:14:56.740564 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:14:56.740572 | orchestrator | 2026-03-10 01:14:56.740579 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-10 01:14:56.740591 | orchestrator | Tuesday 10 March 2026 01:14:53 +0000 (0:00:06.724) 0:03:00.157 ********* 2026-03-10 01:14:56.740599 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:14:56.740606 | orchestrator | 2026-03-10 01:14:56.740614 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-10 01:14:56.740622 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-10 01:14:56.740630 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-10 01:14:56.740637 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 
2026-03-10 01:14:56.740645 | orchestrator | 2026-03-10 01:14:56.740652 | orchestrator | 2026-03-10 01:14:56.740659 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-10 01:14:56.740667 | orchestrator | Tuesday 10 March 2026 01:14:54 +0000 (0:00:00.304) 0:03:00.462 ********* 2026-03-10 01:14:56.740674 | orchestrator | =============================================================================== 2026-03-10 01:14:56.740682 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 27.64s 2026-03-10 01:14:56.740690 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 21.42s 2026-03-10 01:14:56.740700 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 19.86s 2026-03-10 01:14:56.740708 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 16.12s 2026-03-10 01:14:56.740716 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 7.80s 2026-03-10 01:14:56.740723 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 7.60s 2026-03-10 01:14:56.740731 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.97s 2026-03-10 01:14:56.740738 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 6.72s 2026-03-10 01:14:56.740745 | orchestrator | cinder : Copying over config.json files for services -------------------- 4.95s 2026-03-10 01:14:56.740753 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 4.87s 2026-03-10 01:14:56.740760 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.74s 2026-03-10 01:14:56.740767 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.17s 2026-03-10 01:14:56.740774 | orchestrator | cinder : 
Check cinder containers ---------------------------------------- 3.80s 2026-03-10 01:14:56.740788 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.62s 2026-03-10 01:14:56.740796 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.57s 2026-03-10 01:14:56.740803 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.50s 2026-03-10 01:14:56.740811 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.32s 2026-03-10 01:14:56.740818 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 2.66s 2026-03-10 01:14:56.740826 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.57s 2026-03-10 01:14:56.740834 | orchestrator | cinder : Creating Cinder database user and setting permissions ---------- 2.54s 2026-03-10 01:14:56.742092 | orchestrator | 2026-03-10 01:14:56 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:14:56.747013 | orchestrator | 2026-03-10 01:14:56 | INFO  | Task 6464838c-a591-40ce-a3c0-e1c6a21084f8 is in state STARTED 2026-03-10 01:14:56.750879 | orchestrator | 2026-03-10 01:14:56 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:14:56.755904 | orchestrator | 2026-03-10 01:14:56 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:14:56.756654 | orchestrator | 2026-03-10 01:14:56 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:14:59.803157 | orchestrator | 2026-03-10 01:14:59 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:14:59.803254 | orchestrator | 2026-03-10 01:14:59 | INFO  | Task 6464838c-a591-40ce-a3c0-e1c6a21084f8 is in state STARTED 2026-03-10 01:14:59.803264 | orchestrator | 2026-03-10 01:14:59 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state 
STARTED 2026-03-10 01:14:59.803989 | orchestrator | 2026-03-10 01:14:59 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:14:59.804001 | orchestrator | 2026-03-10 01:14:59 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:15:02.846393 | orchestrator | 2026-03-10 01:15:02 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:15:02.847720 | orchestrator | 2026-03-10 01:15:02 | INFO  | Task 6464838c-a591-40ce-a3c0-e1c6a21084f8 is in state STARTED 2026-03-10 01:15:02.849997 | orchestrator | 2026-03-10 01:15:02 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:15:02.852773 | orchestrator | 2026-03-10 01:15:02 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:15:02.852832 | orchestrator | 2026-03-10 01:15:02 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:15:05.909190 | orchestrator | 2026-03-10 01:15:05 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:15:05.911720 | orchestrator | 2026-03-10 01:15:05 | INFO  | Task 6464838c-a591-40ce-a3c0-e1c6a21084f8 is in state STARTED 2026-03-10 01:15:05.914196 | orchestrator | 2026-03-10 01:15:05 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:15:05.916725 | orchestrator | 2026-03-10 01:15:05 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:15:05.916766 | orchestrator | 2026-03-10 01:15:05 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:15:08.962376 | orchestrator | 2026-03-10 01:15:08 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:15:08.962474 | orchestrator | 2026-03-10 01:15:08 | INFO  | Task 6464838c-a591-40ce-a3c0-e1c6a21084f8 is in state STARTED 2026-03-10 01:15:08.962895 | orchestrator | 2026-03-10 01:15:08 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 
01:15:08.963919 | orchestrator | 2026-03-10 01:15:08 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:15:08.963959 | orchestrator | 2026-03-10 01:15:08 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:15:12.012402 | orchestrator | 2026-03-10 01:15:12 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:15:12.017420 | orchestrator | 2026-03-10 01:15:12 | INFO  | Task 6464838c-a591-40ce-a3c0-e1c6a21084f8 is in state STARTED 2026-03-10 01:15:12.021177 | orchestrator | 2026-03-10 01:15:12 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:15:12.023693 | orchestrator | 2026-03-10 01:15:12 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:15:12.023764 | orchestrator | 2026-03-10 01:15:12 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:15:15.074265 | orchestrator | 2026-03-10 01:15:15 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:15:15.075592 | orchestrator | 2026-03-10 01:15:15 | INFO  | Task 6464838c-a591-40ce-a3c0-e1c6a21084f8 is in state STARTED 2026-03-10 01:15:15.077120 | orchestrator | 2026-03-10 01:15:15 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state STARTED 2026-03-10 01:15:15.079580 | orchestrator | 2026-03-10 01:15:15 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:15:15.079634 | orchestrator | 2026-03-10 01:15:15 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:15:18.122800 | orchestrator | 2026-03-10 01:15:18 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:15:18.124810 | orchestrator | 2026-03-10 01:15:18 | INFO  | Task 6464838c-a591-40ce-a3c0-e1c6a21084f8 is in state STARTED 2026-03-10 01:15:18.125796 | orchestrator | 2026-03-10 01:15:18 | INFO  | Task 3b2fcbb1-e599-479a-923b-32905ec94682 is in state SUCCESS 2026-03-10 01:15:18.127174 | orchestrator 
| 2026-03-10 01:15:18 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:15:18.127224 | orchestrator | 2026-03-10 01:15:18 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:15:21.170568 | orchestrator | 2026-03-10 01:15:21 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:15:21.172778 | orchestrator | 2026-03-10 01:15:21 | INFO  | Task 6464838c-a591-40ce-a3c0-e1c6a21084f8 is in state STARTED 2026-03-10 01:15:21.174335 | orchestrator | 2026-03-10 01:15:21 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:15:21.174401 | orchestrator | 2026-03-10 01:15:21 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:15:24.223545 | orchestrator | 2026-03-10 01:15:24 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:15:24.223977 | orchestrator | 2026-03-10 01:15:24 | INFO  | Task 6464838c-a591-40ce-a3c0-e1c6a21084f8 is in state STARTED 2026-03-10 01:15:24.225013 | orchestrator | 2026-03-10 01:15:24 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:15:24.225109 | orchestrator | 2026-03-10 01:15:24 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:15:27.261288 | orchestrator | 2026-03-10 01:15:27 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:15:27.262577 | orchestrator | 2026-03-10 01:15:27 | INFO  | Task 6464838c-a591-40ce-a3c0-e1c6a21084f8 is in state STARTED 2026-03-10 01:15:27.264421 | orchestrator | 2026-03-10 01:15:27 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:15:27.264486 | orchestrator | 2026-03-10 01:15:27 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:15:30.303595 | orchestrator | 2026-03-10 01:15:30 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:15:30.304755 | orchestrator | 2026-03-10 01:15:30 | INFO  | Task 
6464838c-a591-40ce-a3c0-e1c6a21084f8 is in state STARTED 2026-03-10 01:15:30.305655 | orchestrator | 2026-03-10 01:15:30 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:15:30.305709 | orchestrator | 2026-03-10 01:15:30 | INFO  | Wait 1 second(s) until the next check [... identical polling cycles for tasks 791f351b-2162-408f-ab5a-21607914873b, 6464838c-a591-40ce-a3c0-e1c6a21084f8 and 25208d12-72d3-45bc-9cd9-3e16688aa162 (all remaining in state STARTED) repeated every ~3 seconds from 01:15:33 to 01:16:37 ...] 2026-03-10 01:16:40.482851 | orchestrator | 2026-03-10 01:16:40 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:16:40.483896 | orchestrator | 2026-03-10 01:16:40 | INFO  | Task
6464838c-a591-40ce-a3c0-e1c6a21084f8 is in state STARTED 2026-03-10 01:16:40.485289 | orchestrator | 2026-03-10 01:16:40 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:16:40.485351 | orchestrator | 2026-03-10 01:16:40 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:16:43.525656 | orchestrator | 2026-03-10 01:16:43 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:16:43.527599 | orchestrator | 2026-03-10 01:16:43 | INFO  | Task 6464838c-a591-40ce-a3c0-e1c6a21084f8 is in state SUCCESS 2026-03-10 01:16:43.530506 | orchestrator | 2026-03-10 01:16:43 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:16:43.531215 | orchestrator | 2026-03-10 01:16:43 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:16:46.577172 | orchestrator | 2026-03-10 01:16:46 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state STARTED 2026-03-10 01:16:46.577558 | orchestrator | 2026-03-10 01:16:46 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:16:46.579192 | orchestrator | 2026-03-10 01:16:46 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:16:46.579253 | orchestrator | 2026-03-10 01:16:46 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:16:49.625680 | orchestrator | 2026-03-10 01:16:49 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state STARTED 2026-03-10 01:16:49.626276 | orchestrator | 2026-03-10 01:16:49 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:16:49.627595 | orchestrator | 2026-03-10 01:16:49 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:16:49.627641 | orchestrator | 2026-03-10 01:16:49 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:16:52.677466 | orchestrator | 2026-03-10 01:16:52 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state 
STARTED 2026-03-10 01:16:52.678100 | orchestrator | 2026-03-10 01:16:52 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:16:52.679100 | orchestrator | 2026-03-10 01:16:52 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state STARTED 2026-03-10 01:16:52.679130 | orchestrator | 2026-03-10 01:16:52 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:16:55.725786 | orchestrator | 2026-03-10 01:16:55 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state STARTED 2026-03-10 01:16:55.727296 | orchestrator | 2026-03-10 01:16:55 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:16:55.730433 | orchestrator | 2026-03-10 01:16:55 | INFO  | Task 25208d12-72d3-45bc-9cd9-3e16688aa162 is in state SUCCESS 2026-03-10 01:16:55.735824 | orchestrator | 2026-03-10 01:16:55 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:16:55.736894 | orchestrator | 2026-03-10 01:16:55.736937 | orchestrator | 2026-03-10 01:16:55.736949 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2026-03-10 01:16:55.736961 | orchestrator | 2026-03-10 01:16:55.737003 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2026-03-10 01:16:55.737016 | orchestrator | Tuesday 10 March 2026 01:08:20 +0000 (0:00:00.253) 0:00:00.253 ********* 2026-03-10 01:16:55.737027 | orchestrator | changed: [localhost] 2026-03-10 01:16:55.737039 | orchestrator | 2026-03-10 01:16:55.737050 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2026-03-10 01:16:55.737061 | orchestrator | Tuesday 10 March 2026 01:08:23 +0000 (0:00:02.741) 0:00:02.995 ********* 2026-03-10 01:16:55.737072 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 
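The "Task … is in state STARTED / Wait 1 second(s) until the next check" entries above come from the OSISM client polling background task states once per second until every task reaches a terminal state. The mechanic can be sketched roughly as follows; `wait_for_tasks` and `get_task_state` are hypothetical stand-ins for illustration, not OSISM's actual API:

```python
import time

def wait_for_tasks(task_ids, get_task_state, interval=1.0, log=print):
    """Poll task states until every task is terminal (SUCCESS/FAILURE).

    get_task_state is a hypothetical callable mapping a task ID to its
    current state string (e.g. a Celery AsyncResult.state lookup).
    """
    pending = set(task_ids)
    while pending:
        for task_id in sorted(pending):
            state = get_task_state(task_id)
            log(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if pending:
            log(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
```

Note that in the log above, tasks leave the polling set individually as they reach SUCCESS while the remaining ones keep being reported each round, which matches this per-task bookkeeping.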
2026-03-10 01:16:55.737082 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (2 retries left).
2026-03-10 01:16:55.737093 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (1 retries left).
2026-03-10 01:16:55.737104 | orchestrator |
2026-03-10 01:16:55.737115 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-03-10 01:16:55.737132 | orchestrator |
2026-03-10 01:16:55.737151 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-03-10 01:16:55.737169 | orchestrator |
2026-03-10 01:16:55.737188 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-03-10 01:16:55.737208 | orchestrator |
2026-03-10 01:16:55.737227 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-03-10 01:16:55.737245 | orchestrator |
2026-03-10 01:16:55.737262 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-03-10 01:16:55.737273 | orchestrator |
2026-03-10 01:16:55.737284 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-03-10 01:16:55.737294 | orchestrator |
2026-03-10 01:16:55.737305 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-03-10 01:16:55.737316 | orchestrator |
2026-03-10 01:16:55.737327 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-03-10 01:16:55.737339 | orchestrator |
2026-03-10 01:16:55.737350 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] ****************
2026-03-10 01:16:55.737360 | orchestrator | changed: [localhost]
2026-03-10 01:16:55.737371 | orchestrator |
2026-03-10 01:16:55.737382 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2026-03-10 01:16:55.737393 | orchestrator | Tuesday 10 March 2026 01:15:09 +0000 (0:06:46.033) 0:06:49.028 *********
2026-03-10 01:16:55.737403 | orchestrator | changed: [localhost]
2026-03-10 01:16:55.737414 | orchestrator |
2026-03-10 01:16:55.737424 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-10 01:16:55.737435 | orchestrator |
2026-03-10 01:16:55.737446 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-10 01:16:55.737457 | orchestrator | Tuesday 10 March 2026 01:15:14 +0000 (0:00:05.332) 0:06:54.361 *********
2026-03-10 01:16:55.737467 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:16:55.737479 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:16:55.737491 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:16:55.737503 | orchestrator |
2026-03-10 01:16:55.737516 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-10 01:16:55.737528 | orchestrator | Tuesday 10 March 2026 01:15:15 +0000 (0:00:00.321) 0:06:54.683 *********
2026-03-10 01:16:55.737541 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-03-10 01:16:55.737554 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-03-10 01:16:55.737592 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-03-10 01:16:55.737606 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-03-10 01:16:55.737618 | orchestrator |
2026-03-10 01:16:55.737631 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-03-10 01:16:55.737643 | orchestrator | skipping: no hosts matched
2026-03-10 01:16:55.737654 | orchestrator |
2026-03-10 01:16:55.737665 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:16:55.737676 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 01:16:55.737690 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 01:16:55.737702 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 01:16:55.737713 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 01:16:55.737723 | orchestrator |
2026-03-10 01:16:55.737737 | orchestrator |
2026-03-10 01:16:55.737755 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:16:55.737783 | orchestrator | Tuesday 10 March 2026 01:15:15 +0000 (0:00:00.683) 0:06:55.366 *********
2026-03-10 01:16:55.737803 | orchestrator | ===============================================================================
2026-03-10 01:16:55.737820 | orchestrator | Download ironic-agent initramfs --------------------------------------- 406.03s
2026-03-10 01:16:55.737838 | orchestrator | Download ironic-agent kernel -------------------------------------------- 5.33s
2026-03-10 01:16:55.737855 | orchestrator | Ensure the destination directory exists --------------------------------- 2.74s
2026-03-10 01:16:55.737873 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.68s
2026-03-10 01:16:55.737893 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.32s
2026-03-10 01:16:55.737912 | orchestrator |
2026-03-10 01:16:55.737930 | orchestrator |
2026-03-10 01:16:55.737948 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-10 01:16:55.738071 | orchestrator |
2026-03-10 01:16:55.738103 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-10 01:16:55.738131 |
orchestrator | Tuesday 10 March 2026 01:14:59 +0000 (0:00:00.195) 0:00:00.195 *********
2026-03-10 01:16:55.738143 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:16:55.738154 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:16:55.738171 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:16:55.738190 | orchestrator |
2026-03-10 01:16:55.738209 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-10 01:16:55.738229 | orchestrator | Tuesday 10 March 2026 01:14:59 +0000 (0:00:00.306) 0:00:00.502 *********
2026-03-10 01:16:55.738249 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-03-10 01:16:55.738269 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-10 01:16:55.738289 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-10 01:16:55.738301 | orchestrator |
2026-03-10 01:16:55.738311 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-10 01:16:55.738322 | orchestrator |
2026-03-10 01:16:55.738333 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-10 01:16:55.738343 | orchestrator | Tuesday 10 March 2026 01:15:00 +0000 (0:00:00.716) 0:00:01.218 *********
2026-03-10 01:16:55.738354 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:16:55.738365 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:16:55.738375 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:16:55.738386 | orchestrator |
2026-03-10 01:16:55.738397 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:16:55.738408 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 01:16:55.738431 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 01:16:55.738442 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 01:16:55.738453 | orchestrator |
2026-03-10 01:16:55.738463 | orchestrator |
2026-03-10 01:16:55.738474 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:16:55.738493 | orchestrator | Tuesday 10 March 2026 01:16:42 +0000 (0:01:41.759) 0:01:42.978 *********
2026-03-10 01:16:55.738522 | orchestrator | ===============================================================================
2026-03-10 01:16:55.738541 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 101.76s
2026-03-10 01:16:55.738558 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.72s
2026-03-10 01:16:55.738575 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s
2026-03-10 01:16:55.738592 | orchestrator |
2026-03-10 01:16:55.738608 | orchestrator |
2026-03-10 01:16:55.738626 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-10 01:16:55.738644 | orchestrator |
2026-03-10 01:16:55.738663 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-10 01:16:55.738682 | orchestrator | Tuesday 10 March 2026 01:14:48 +0000 (0:00:00.288) 0:00:00.288 *********
2026-03-10 01:16:55.738701 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:16:55.738720 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:16:55.738736 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:16:55.738747 | orchestrator |
2026-03-10 01:16:55.738758 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-10 01:16:55.738769 | orchestrator | Tuesday 10 March 2026 01:14:49 +0000 (0:00:00.387) 0:00:00.675 *********
2026-03-10 01:16:55.738779 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2026-03-10 01:16:55.738790 |
orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-10 01:16:55.738801 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-10 01:16:55.738812 | orchestrator | 2026-03-10 01:16:55.738823 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-10 01:16:55.738833 | orchestrator | 2026-03-10 01:16:55.738844 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-10 01:16:55.738855 | orchestrator | Tuesday 10 March 2026 01:14:49 +0000 (0:00:00.515) 0:00:01.191 ********* 2026-03-10 01:16:55.738866 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:16:55.738877 | orchestrator | 2026-03-10 01:16:55.738887 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-10 01:16:55.738898 | orchestrator | Tuesday 10 March 2026 01:14:50 +0000 (0:00:00.628) 0:00:01.820 ********* 2026-03-10 01:16:55.738912 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:16:55.738951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:16:55.739005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:16:55.739018 | orchestrator | 2026-03-10 01:16:55.739029 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-10 01:16:55.739040 | orchestrator | Tuesday 10 March 2026 01:14:51 +0000 (0:00:00.992) 0:00:02.812 ********* 2026-03-10 01:16:55.739050 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-10 01:16:55.739062 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-10 01:16:55.739072 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 01:16:55.739083 | orchestrator | 2026-03-10 01:16:55.739094 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-10 01:16:55.739105 | orchestrator | Tuesday 10 March 2026 01:14:52 
+0000 (0:00:00.971) 0:00:03.784 ********* 2026-03-10 01:16:55.739115 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:16:55.739126 | orchestrator | 2026-03-10 01:16:55.739137 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-10 01:16:55.739147 | orchestrator | Tuesday 10 March 2026 01:14:53 +0000 (0:00:00.869) 0:00:04.654 ********* 2026-03-10 01:16:55.739158 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:16:55.739170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:16:55.739182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': 
{'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:16:55.739200 | orchestrator | 2026-03-10 01:16:55.739219 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-10 01:16:55.739236 | orchestrator | Tuesday 10 March 2026 01:14:54 +0000 (0:00:01.554) 0:00:06.209 ********* 2026-03-10 01:16:55.739274 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-10 01:16:55.739295 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-10 01:16:55.739314 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:16:55.739333 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:16:55.739344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-10 01:16:55.739356 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:16:55.739366 | orchestrator | 2026-03-10 01:16:55.739377 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-10 01:16:55.739388 | orchestrator | Tuesday 10 March 2026 01:14:55 +0000 (0:00:00.428) 0:00:06.638 ********* 2026-03-10 01:16:55.739398 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-10 01:16:55.739409 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:16:55.739421 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-10 01:16:55.739442 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:16:55.739466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-10 01:16:55.739478 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:16:55.739489 | orchestrator | 2026-03-10 01:16:55.739500 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 
2026-03-10 01:16:55.739511 | orchestrator | Tuesday 10 March 2026 01:14:56 +0000 (0:00:00.893) 0:00:07.531 ********* 2026-03-10 01:16:55.739522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:16:55.739534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:16:55.739546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:16:55.739557 | orchestrator | 2026-03-10 01:16:55.739568 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-10 01:16:55.739579 | orchestrator | Tuesday 10 March 2026 01:14:57 +0000 (0:00:01.290) 0:00:08.822 ********* 2026-03-10 01:16:55.739590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:16:55.739609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:16:55.739633 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:16:55.739646 | orchestrator | 2026-03-10 01:16:55.739657 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-10 01:16:55.739668 | orchestrator | Tuesday 10 March 2026 01:14:58 +0000 (0:00:01.472) 0:00:10.295 ********* 2026-03-10 01:16:55.739679 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:16:55.739690 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:16:55.739701 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:16:55.739711 | orchestrator | 2026-03-10 01:16:55.739722 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-10 01:16:55.739733 | orchestrator | Tuesday 10 March 2026 01:14:59 +0000 (0:00:00.556) 0:00:10.851 ********* 2026-03-10 01:16:55.739744 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-10 01:16:55.739755 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-10 01:16:55.739765 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-10 01:16:55.739776 | orchestrator | 2026-03-10 01:16:55.739787 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-10 01:16:55.739798 | 
orchestrator | Tuesday 10 March 2026 01:15:00 +0000 (0:00:01.403) 0:00:12.255 ********* 2026-03-10 01:16:55.739809 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-10 01:16:55.739821 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-10 01:16:55.739832 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-10 01:16:55.739843 | orchestrator | 2026-03-10 01:16:55.739854 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-10 01:16:55.739864 | orchestrator | Tuesday 10 March 2026 01:15:02 +0000 (0:00:01.255) 0:00:13.510 ********* 2026-03-10 01:16:55.739875 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 01:16:55.739886 | orchestrator | 2026-03-10 01:16:55.739897 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-10 01:16:55.739908 | orchestrator | Tuesday 10 March 2026 01:15:02 +0000 (0:00:00.841) 0:00:14.352 ********* 2026-03-10 01:16:55.739934 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-10 01:16:55.739945 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-10 01:16:55.739956 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:16:55.740130 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:16:55.740170 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:16:55.740181 | orchestrator | 2026-03-10 01:16:55.740193 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-10 01:16:55.740203 | orchestrator | Tuesday 10 March 2026 01:15:03 +0000 (0:00:00.770) 0:00:15.123 ********* 2026-03-10 01:16:55.740214 | orchestrator | skipping: [testbed-node-0] 2026-03-10 
01:16:55.740225 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:16:55.740236 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:16:55.740262 | orchestrator | 2026-03-10 01:16:55.740274 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-10 01:16:55.740285 | orchestrator | Tuesday 10 March 2026 01:15:04 +0000 (0:00:00.618) 0:00:15.741 ********* 2026-03-10 01:16:55.740297 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1101595, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2698224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.740321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1101595, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2698224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.740348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1101595, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2698224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.740359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1101620, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2808225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.740370 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1101620, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2808225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.740391 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1101620, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2808225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.740402 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1101650, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2948227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.740412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1101650, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2948227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.740456 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1101650, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2948227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.740468 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1101613, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.27611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.740479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1101613, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.27611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.740495 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1101613, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.27611, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.740505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1101651, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2968228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.740515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1101651, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2968228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 
01:16:55.742527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1101651, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2968228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.742640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1101601, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2718225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.742655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1101601, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2718225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': 
True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.742682 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1101601, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2718225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.742692 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1101631, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2848976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.742703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1101631, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2848976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.742718 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1101631, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2848976, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.742741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1101645, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2918227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.742752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1101645, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2918227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.742842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1101645, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2918227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.742853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1101592, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2678223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.742864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1101592, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2678223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.742874 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1101592, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2678223, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.742910 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1101600, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2718225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.742923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1101600, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2718225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.742939 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1101600, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2718225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.742950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1101617, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.276643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.742960 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1101617, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.276643, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.742997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1101617, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.276643, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743012 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1101636, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2878227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1101636, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2878227, 'gr_name': 'root', 
'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743046 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1101636, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2878227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1101648, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2938228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1101648, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2938228, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743078 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13320, 'inode': 1101648, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2938228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1101608, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2748225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743114 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1101608, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 
1773101976.2748225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1101608, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2748225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1101640, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2908227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1101640, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 
1773100946.0, 'ctime': 1773101976.2908227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 20042, 'inode': 1101640, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2908227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1101653, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.297823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1101653, 'dev': 87, 'nlink': 1, 'atime': 
1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.297823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/smb-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/smb-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29877, 'inode': 1101653, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.297823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1101633, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2868226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1101633, 'dev': 87, 
'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2868226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743258 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38375, 'inode': 1101633, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2868226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743270 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1101629, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2838225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 
0, 'gid': 0, 'size': 63043, 'inode': 1101629, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2838225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 63043, 'inode': 1101629, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2838225, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743322 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1101628, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2828226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743333 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1101628, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2828226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27387, 'inode': 1101628, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2828226, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1101638, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2888227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1101638, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2888227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743813 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49016, 'inode': 1101638, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2888227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743832 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1101627, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2818227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743842 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1101627, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2818227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 43303, 'inode': 1101627, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2818227, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1101647, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2928228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743873 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1101647, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2928228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16614, 'inode': 1101647, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2928228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1101605, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2728224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743921 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1101605, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2728224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743931 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 52667, 'inode': 1101605, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2728224, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743941 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1101729, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3348236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743951 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1101729, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3348236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.743961 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1101729, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3348236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1101670, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3138232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744019 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1101670, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3138232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744029 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1101670, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3138232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1101659, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.300919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-03-10 01:16:55.744050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1101659, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.300919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1101659, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.300919, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744086 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1101688, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3174725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1101688, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3174725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15767, 'inode': 1101688, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3174725, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1101655, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2988229, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1101655, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2988229, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744139 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1101655, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2988229, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744164 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1101707, 'dev': 87, 'nlink': 1, 
'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3278234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1101707, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3278234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1101707, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3278234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1101689, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3238232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744206 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1101689, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3238232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744216 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1101689, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3238232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1101708, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3288233, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744257 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1101708, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3288233, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22303, 'inode': 1101708, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3288233, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744277 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1101725, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.334467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1101725, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.334467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744298 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1101725, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.334467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744318 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1101703, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3269708, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744334 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1101703, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3269708, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1101703, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3269708, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 
01:16:55.744354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1101682, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3148685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1101682, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3148685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1101682, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3148685, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1101668, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.304823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1101668, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.304823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744427 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1101668, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.304823, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744438 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1101681, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3138232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1101681, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3138232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1101681, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3138232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1101662, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3028228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1101662, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3028228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1101662, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 
1773101976.3028228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1101686, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3168232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1101686, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3168232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 
'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1101686, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3168232, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744564 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1101719, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3338234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1101719, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3338234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744598 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1101719, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3338234, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1101714, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3313305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744621 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1101714, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3313305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1101714, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3313305, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744649 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1101656, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2994494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1101656, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2994494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744679 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1101656, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2994494, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1101657, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2998228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744700 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1101657, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2998228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 
01:16:55.744710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1101657, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.2998228, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1101701, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3258233, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1101701, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3258233, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1101701, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3258233, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1101711, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3293922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1101711, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 
1773101976.3293922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1101711, 'dev': 87, 'nlink': 1, 'atime': 1773100946.0, 'mtime': 1773100946.0, 'ctime': 1773101976.3293922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-10 01:16:55.744806 | orchestrator | 2026-03-10 01:16:55.744818 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-10 01:16:55.744828 | orchestrator | Tuesday 10 March 2026 01:15:46 +0000 (0:00:41.926) 0:00:57.668 ********* 2026-03-10 01:16:55.744838 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:16:55.744853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 
'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:16:55.744868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-10 01:16:55.744879 | orchestrator | 2026-03-10 01:16:55.744889 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-10 01:16:55.744899 | orchestrator | Tuesday 10 March 2026 01:15:47 +0000 (0:00:00.898) 0:00:58.566 ********* 2026-03-10 01:16:55.744909 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:16:55.744919 | orchestrator | 2026-03-10 01:16:55.744929 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-10 01:16:55.744939 | orchestrator | Tuesday 10 March 2026 01:15:49 +0000 (0:00:02.479) 0:01:01.045 ********* 2026-03-10 01:16:55.744948 | orchestrator | changed: 
[testbed-node-0] 2026-03-10 01:16:55.744958 | orchestrator | 2026-03-10 01:16:55.745028 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-10 01:16:55.745040 | orchestrator | Tuesday 10 March 2026 01:15:52 +0000 (0:00:02.530) 0:01:03.576 ********* 2026-03-10 01:16:55.745049 | orchestrator | 2026-03-10 01:16:55.745059 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-10 01:16:55.745069 | orchestrator | Tuesday 10 March 2026 01:15:52 +0000 (0:00:00.098) 0:01:03.674 ********* 2026-03-10 01:16:55.745088 | orchestrator | 2026-03-10 01:16:55.745098 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2026-03-10 01:16:55.745108 | orchestrator | Tuesday 10 March 2026 01:15:52 +0000 (0:00:00.260) 0:01:03.934 ********* 2026-03-10 01:16:55.745117 | orchestrator | 2026-03-10 01:16:55.745127 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2026-03-10 01:16:55.745136 | orchestrator | Tuesday 10 March 2026 01:15:52 +0000 (0:00:00.069) 0:01:04.004 ********* 2026-03-10 01:16:55.745146 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:16:55.745156 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:16:55.745165 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:16:55.745175 | orchestrator | 2026-03-10 01:16:55.745184 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2026-03-10 01:16:55.745194 | orchestrator | Tuesday 10 March 2026 01:15:54 +0000 (0:00:01.912) 0:01:05.917 ********* 2026-03-10 01:16:55.745204 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:16:55.745213 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:16:55.745223 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 
2026-03-10 01:16:55.745233 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2026-03-10 01:16:55.745243 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:16:55.745252 | orchestrator | 2026-03-10 01:16:55.745274 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2026-03-10 01:16:55.745284 | orchestrator | Tuesday 10 March 2026 01:16:21 +0000 (0:00:27.048) 0:01:32.965 ********* 2026-03-10 01:16:55.745302 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:16:55.745312 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:16:55.745321 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:16:55.745331 | orchestrator | 2026-03-10 01:16:55.745340 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2026-03-10 01:16:55.745350 | orchestrator | Tuesday 10 March 2026 01:16:47 +0000 (0:00:25.483) 0:01:58.449 ********* 2026-03-10 01:16:55.745359 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:16:55.745369 | orchestrator | 2026-03-10 01:16:55.745378 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2026-03-10 01:16:55.745388 | orchestrator | Tuesday 10 March 2026 01:16:49 +0000 (0:00:02.677) 0:02:01.126 ********* 2026-03-10 01:16:55.745398 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:16:55.745407 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:16:55.745417 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:16:55.745426 | orchestrator | 2026-03-10 01:16:55.745435 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2026-03-10 01:16:55.745445 | orchestrator | Tuesday 10 March 2026 01:16:50 +0000 (0:00:00.537) 0:02:01.664 ********* 2026-03-10 01:16:55.745457 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 
'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-03-10 01:16:55.745470 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-03-10 01:16:55.745480 | orchestrator |
2026-03-10 01:16:55.745490 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-03-10 01:16:55.745500 | orchestrator | Tuesday 10 March 2026 01:16:53 +0000 (0:00:02.957)       0:02:04.622 *********
2026-03-10 01:16:55.745514 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:16:55.745524 | orchestrator |
2026-03-10 01:16:55.745534 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:16:55.745556 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0  failed=0  skipped=7  rescued=0  ignored=0
2026-03-10 01:16:55.745568 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0  failed=0  skipped=7  rescued=0  ignored=0
2026-03-10 01:16:55.745578 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0  failed=0  skipped=7  rescued=0  ignored=0
2026-03-10 01:16:55.745587 | orchestrator |
2026-03-10 01:16:55.745597 | orchestrator |
2026-03-10 01:16:55.745607 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:16:55.745617 | orchestrator | Tuesday 10 March 2026 01:16:53 +0000 (0:00:00.281)       0:02:04.904 *********
2026-03-10 01:16:55.745626 | orchestrator | ===============================================================================
2026-03-10 01:16:55.745636 | orchestrator | grafana : Copying over custom dashboards -------------------------------- 41.93s
2026-03-10 01:16:55.745646 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 27.05s
2026-03-10 01:16:55.745655 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 25.48s
2026-03-10 01:16:55.745665 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.96s
2026-03-10 01:16:55.745675 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.68s
2026-03-10 01:16:55.745684 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.53s
2026-03-10 01:16:55.745694 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.48s
2026-03-10 01:16:55.745703 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.91s
2026-03-10 01:16:55.745713 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.55s
2026-03-10 01:16:55.745723 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.47s
2026-03-10 01:16:55.745732 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.40s
2026-03-10 01:16:55.745742 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.29s
2026-03-10 01:16:55.745751 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.26s
2026-03-10 01:16:55.745761 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.99s
2026-03-10 01:16:55.745771 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.97s
2026-03-10 01:16:55.745780 | orchestrator | grafana : Check grafana containers -------------------------------------- 0.90s
2026-03-10 01:16:55.745790 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.89s
2026-03-10 01:16:55.745799 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.87s
2026-03-10 01:16:55.745809 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.84s
2026-03-10 01:16:55.745819 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.77s
2026-03-10 01:16:58.780448 | orchestrator | 2026-03-10 01:16:58 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state STARTED
2026-03-10 01:16:58.781238 | orchestrator | 2026-03-10 01:16:58 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED
2026-03-10 01:16:58.781299 | orchestrator | 2026-03-10 01:16:58 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:17:01.818829 | orchestrator | 2026-03-10 01:17:01 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state STARTED
2026-03-10 01:17:01.818922 | orchestrator | 2026-03-10 01:17:01 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED
2026-03-10 01:17:01.818933 | orchestrator | 2026-03-10 01:17:01 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:17:04.850316 | orchestrator | 2026-03-10 01:17:04 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state STARTED
2026-03-10 01:17:04.852021 | orchestrator | 2026-03-10 01:17:04 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED
2026-03-10 01:17:04.852079 | orchestrator | 2026-03-10 01:17:04 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:17:07.900600 | orchestrator | 2026-03-10 01:17:07 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state STARTED
2026-03-10 01:17:07.900701 | orchestrator | 2026-03-10 01:17:07 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED
2026-03-10 01:17:07.900721 | orchestrator | 2026-03-10 01:17:07 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:17:10.950229 |
01:21:24 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state STARTED 2026-03-10 01:21:24.041710 | orchestrator | 2026-03-10 01:21:24 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:21:24.041835 | orchestrator | 2026-03-10 01:21:24 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:21:27.087348 | orchestrator | 2026-03-10 01:21:27 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state STARTED 2026-03-10 01:21:27.089130 | orchestrator | 2026-03-10 01:21:27 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:21:27.089197 | orchestrator | 2026-03-10 01:21:27 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:21:30.137687 | orchestrator | 2026-03-10 01:21:30 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state STARTED 2026-03-10 01:21:30.138662 | orchestrator | 2026-03-10 01:21:30 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:21:30.138785 | orchestrator | 2026-03-10 01:21:30 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:21:33.177446 | orchestrator | 2026-03-10 01:21:33 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state STARTED 2026-03-10 01:21:33.179717 | orchestrator | 2026-03-10 01:21:33 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state STARTED 2026-03-10 01:21:33.179810 | orchestrator | 2026-03-10 01:21:33 | INFO  | Wait 1 second(s) until the next check 2026-03-10 01:21:36.228824 | orchestrator | 2026-03-10 01:21:36 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state STARTED 2026-03-10 01:21:36.233023 | orchestrator | 2026-03-10 01:21:36 | INFO  | Task 791f351b-2162-408f-ab5a-21607914873b is in state SUCCESS 2026-03-10 01:21:36.235984 | orchestrator | 2026-03-10 01:21:36.236143 | orchestrator | 2026-03-10 01:21:36.236158 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-10 01:21:36.236170 | 
orchestrator | 2026-03-10 01:21:36.236210 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2026-03-10 01:21:36.236222 | orchestrator | Tuesday 10 March 2026 01:12:16 +0000 (0:00:00.425) 0:00:00.425 ********* 2026-03-10 01:21:36.236233 | orchestrator | changed: [testbed-manager] 2026-03-10 01:21:36.236246 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:21:36.236257 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:21:36.236268 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:21:36.236279 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:21:36.236289 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:21:36.236300 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:21:36.236311 | orchestrator | 2026-03-10 01:21:36.236322 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-10 01:21:36.236333 | orchestrator | Tuesday 10 March 2026 01:12:17 +0000 (0:00:01.014) 0:00:01.440 ********* 2026-03-10 01:21:36.236344 | orchestrator | changed: [testbed-manager] 2026-03-10 01:21:36.236367 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:21:36.236379 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:21:36.236397 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:21:36.236415 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:21:36.236432 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:21:36.236450 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:21:36.236469 | orchestrator | 2026-03-10 01:21:36.236502 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-10 01:21:36.236519 | orchestrator | Tuesday 10 March 2026 01:12:18 +0000 (0:00:00.779) 0:00:02.220 ********* 2026-03-10 01:21:36.236531 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2026-03-10 01:21:36.236579 | orchestrator | changed: [testbed-node-0] => 
(item=enable_nova_True) 2026-03-10 01:21:36.236593 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2026-03-10 01:21:36.236606 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2026-03-10 01:21:36.236618 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2026-03-10 01:21:36.236631 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2026-03-10 01:21:36.236644 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2026-03-10 01:21:36.236657 | orchestrator | 2026-03-10 01:21:36.236669 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2026-03-10 01:21:36.236682 | orchestrator | 2026-03-10 01:21:36.236695 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-10 01:21:36.236707 | orchestrator | Tuesday 10 March 2026 01:12:19 +0000 (0:00:01.035) 0:00:03.255 ********* 2026-03-10 01:21:36.236737 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:21:36.236749 | orchestrator | 2026-03-10 01:21:36.236785 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2026-03-10 01:21:36.236853 | orchestrator | Tuesday 10 March 2026 01:12:19 +0000 (0:00:00.839) 0:00:04.095 ********* 2026-03-10 01:21:36.236868 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2026-03-10 01:21:36.236960 | orchestrator | changed: [testbed-node-0] => (item=nova_api) 2026-03-10 01:21:36.236972 | orchestrator | 2026-03-10 01:21:36.236983 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2026-03-10 01:21:36.236994 | orchestrator | Tuesday 10 March 2026 01:12:24 +0000 (0:00:04.227) 0:00:08.322 ********* 2026-03-10 01:21:36.237005 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-10 01:21:36.237042 | orchestrator | changed: [testbed-node-0] => (item=None) 
2026-03-10 01:21:36.237054 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:21:36.237065 | orchestrator | 2026-03-10 01:21:36.237075 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-10 01:21:36.237086 | orchestrator | Tuesday 10 March 2026 01:12:28 +0000 (0:00:04.689) 0:00:13.012 ********* 2026-03-10 01:21:36.237097 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:21:36.237108 | orchestrator | 2026-03-10 01:21:36.237118 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2026-03-10 01:21:36.237129 | orchestrator | Tuesday 10 March 2026 01:12:29 +0000 (0:00:00.958) 0:00:13.970 ********* 2026-03-10 01:21:36.237139 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:21:36.237150 | orchestrator | 2026-03-10 01:21:36.237161 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2026-03-10 01:21:36.237172 | orchestrator | Tuesday 10 March 2026 01:12:32 +0000 (0:00:02.194) 0:00:16.164 ********* 2026-03-10 01:21:36.237182 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:21:36.237193 | orchestrator | 2026-03-10 01:21:36.237204 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-10 01:21:36.237214 | orchestrator | Tuesday 10 March 2026 01:12:36 +0000 (0:00:04.192) 0:00:20.357 ********* 2026-03-10 01:21:36.237225 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.237235 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.237247 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.237258 | orchestrator | 2026-03-10 01:21:36.237268 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2026-03-10 01:21:36.237279 | orchestrator | Tuesday 10 March 2026 01:12:36 +0000 (0:00:00.452) 0:00:20.810 ********* 2026-03-10 01:21:36.237290 | orchestrator | ok: [testbed-node-0] 2026-03-10 
01:21:36.237301 | orchestrator | 2026-03-10 01:21:36.237312 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2026-03-10 01:21:36.237336 | orchestrator | Tuesday 10 March 2026 01:13:10 +0000 (0:00:33.401) 0:00:54.211 ********* 2026-03-10 01:21:36.237347 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:21:36.237358 | orchestrator | 2026-03-10 01:21:36.237369 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-10 01:21:36.237379 | orchestrator | Tuesday 10 March 2026 01:13:27 +0000 (0:00:16.952) 0:01:11.164 ********* 2026-03-10 01:21:36.237390 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:21:36.237400 | orchestrator | 2026-03-10 01:21:36.237411 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-10 01:21:36.237422 | orchestrator | Tuesday 10 March 2026 01:13:41 +0000 (0:00:14.090) 0:01:25.254 ********* 2026-03-10 01:21:36.237449 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:21:36.237461 | orchestrator | 2026-03-10 01:21:36.237472 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2026-03-10 01:21:36.237482 | orchestrator | Tuesday 10 March 2026 01:13:42 +0000 (0:00:01.335) 0:01:26.589 ********* 2026-03-10 01:21:36.237493 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.237503 | orchestrator | 2026-03-10 01:21:36.237514 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-10 01:21:36.237525 | orchestrator | Tuesday 10 March 2026 01:13:42 +0000 (0:00:00.488) 0:01:27.078 ********* 2026-03-10 01:21:36.237535 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:21:36.237555 | orchestrator | 2026-03-10 01:21:36.237566 | orchestrator | TASK [nova : Running Nova API bootstrap container] 
***************************** 2026-03-10 01:21:36.237576 | orchestrator | Tuesday 10 March 2026 01:13:43 +0000 (0:00:00.538) 0:01:27.617 ********* 2026-03-10 01:21:36.237587 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:21:36.237597 | orchestrator | 2026-03-10 01:21:36.237608 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-10 01:21:36.237619 | orchestrator | Tuesday 10 March 2026 01:14:03 +0000 (0:00:20.443) 0:01:48.060 ********* 2026-03-10 01:21:36.237629 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.237640 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.237650 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.237661 | orchestrator | 2026-03-10 01:21:36.237671 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2026-03-10 01:21:36.237682 | orchestrator | 2026-03-10 01:21:36.237692 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2026-03-10 01:21:36.237715 | orchestrator | Tuesday 10 March 2026 01:14:04 +0000 (0:00:00.426) 0:01:48.486 ********* 2026-03-10 01:21:36.237727 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:21:36.237738 | orchestrator | 2026-03-10 01:21:36.237748 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2026-03-10 01:21:36.237803 | orchestrator | Tuesday 10 March 2026 01:14:04 +0000 (0:00:00.616) 0:01:49.103 ********* 2026-03-10 01:21:36.237815 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.237826 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.237836 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:21:36.237847 | orchestrator | 2026-03-10 01:21:36.237858 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2026-03-10 01:21:36.237869 | orchestrator | Tuesday 
10 March 2026 01:14:07 +0000 (0:00:02.310) 0:01:51.414 ********* 2026-03-10 01:21:36.237879 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.237890 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.237900 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:21:36.237911 | orchestrator | 2026-03-10 01:21:36.237922 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-10 01:21:36.237933 | orchestrator | Tuesday 10 March 2026 01:14:09 +0000 (0:00:02.340) 0:01:53.754 ********* 2026-03-10 01:21:36.237943 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.237954 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.237964 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.237975 | orchestrator | 2026-03-10 01:21:36.237985 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-10 01:21:36.237996 | orchestrator | Tuesday 10 March 2026 01:14:10 +0000 (0:00:00.432) 0:01:54.187 ********* 2026-03-10 01:21:36.238007 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-10 01:21:36.238077 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-10 01:21:36.238091 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.238102 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.238113 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-10 01:21:36.238124 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2026-03-10 01:21:36.238135 | orchestrator | 2026-03-10 01:21:36.238145 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2026-03-10 01:21:36.238156 | orchestrator | Tuesday 10 March 2026 01:14:19 +0000 (0:00:09.441) 0:02:03.628 ********* 2026-03-10 01:21:36.238166 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.238177 | orchestrator | skipping: [testbed-node-2] 
2026-03-10 01:21:36.238188 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.238198 | orchestrator | 2026-03-10 01:21:36.238221 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2026-03-10 01:21:36.238232 | orchestrator | Tuesday 10 March 2026 01:14:20 +0000 (0:00:00.623) 0:02:04.252 ********* 2026-03-10 01:21:36.238251 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-10 01:21:36.238262 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.238273 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-10 01:21:36.238283 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.238294 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-10 01:21:36.238305 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.238316 | orchestrator | 2026-03-10 01:21:36.238326 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-10 01:21:36.238338 | orchestrator | Tuesday 10 March 2026 01:14:21 +0000 (0:00:01.168) 0:02:05.421 ********* 2026-03-10 01:21:36.238348 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.238359 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.238370 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:21:36.238381 | orchestrator | 2026-03-10 01:21:36.238454 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ****** 2026-03-10 01:21:36.238465 | orchestrator | Tuesday 10 March 2026 01:14:22 +0000 (0:00:01.038) 0:02:06.459 ********* 2026-03-10 01:21:36.238476 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.238487 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.238498 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:21:36.238508 | orchestrator | 2026-03-10 01:21:36.238519 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] ************** 
2026-03-10 01:21:36.238530 | orchestrator | Tuesday 10 March 2026 01:14:23 +0000 (0:00:01.059) 0:02:07.518 ********* 2026-03-10 01:21:36.238540 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.238551 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.238571 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:21:36.238583 | orchestrator | 2026-03-10 01:21:36.238593 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] *********************** 2026-03-10 01:21:36.238604 | orchestrator | Tuesday 10 March 2026 01:14:26 +0000 (0:00:02.818) 0:02:10.337 ********* 2026-03-10 01:21:36.238615 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.238626 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.238637 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:21:36.238647 | orchestrator | 2026-03-10 01:21:36.238659 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-10 01:21:36.238670 | orchestrator | Tuesday 10 March 2026 01:14:52 +0000 (0:00:26.182) 0:02:36.520 ********* 2026-03-10 01:21:36.238680 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.238691 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.238702 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:21:36.238712 | orchestrator | 2026-03-10 01:21:36.238723 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-10 01:21:36.238734 | orchestrator | Tuesday 10 March 2026 01:15:06 +0000 (0:00:14.250) 0:02:50.771 ********* 2026-03-10 01:21:36.238745 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:21:36.238755 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.238824 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.238835 | orchestrator | 2026-03-10 01:21:36.238846 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2026-03-10 
01:21:36.238857 | orchestrator | Tuesday 10 March 2026 01:15:07 +0000 (0:00:01.109) 0:02:51.880 ********* 2026-03-10 01:21:36.238868 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.238878 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.238889 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:21:36.238900 | orchestrator | 2026-03-10 01:21:36.238911 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2026-03-10 01:21:36.238922 | orchestrator | Tuesday 10 March 2026 01:15:22 +0000 (0:00:15.000) 0:03:06.881 ********* 2026-03-10 01:21:36.238933 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.238943 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.238954 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.238965 | orchestrator | 2026-03-10 01:21:36.238976 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2026-03-10 01:21:36.238992 | orchestrator | Tuesday 10 March 2026 01:15:24 +0000 (0:00:01.314) 0:03:08.195 ********* 2026-03-10 01:21:36.239003 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.239014 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.239024 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.239035 | orchestrator | 2026-03-10 01:21:36.239046 | orchestrator | PLAY [Apply role nova] ********************************************************* 2026-03-10 01:21:36.239057 | orchestrator | 2026-03-10 01:21:36.239067 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-10 01:21:36.239078 | orchestrator | Tuesday 10 March 2026 01:15:24 +0000 (0:00:00.583) 0:03:08.779 ********* 2026-03-10 01:21:36.239089 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:21:36.239100 | orchestrator | 2026-03-10 01:21:36.239111 | orchestrator 
| TASK [service-ks-register : nova | Creating services] ************************** 2026-03-10 01:21:36.239122 | orchestrator | Tuesday 10 March 2026 01:15:25 +0000 (0:00:00.598) 0:03:09.377 ********* 2026-03-10 01:21:36.239133 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2026-03-10 01:21:36.239144 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2026-03-10 01:21:36.239154 | orchestrator | 2026-03-10 01:21:36.239242 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2026-03-10 01:21:36.239254 | orchestrator | Tuesday 10 March 2026 01:15:29 +0000 (0:00:04.089) 0:03:13.467 ********* 2026-03-10 01:21:36.239265 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2026-03-10 01:21:36.239278 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2026-03-10 01:21:36.239289 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2026-03-10 01:21:36.239299 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2026-03-10 01:21:36.239309 | orchestrator | 2026-03-10 01:21:36.239318 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2026-03-10 01:21:36.239328 | orchestrator | Tuesday 10 March 2026 01:15:36 +0000 (0:00:07.349) 0:03:20.816 ********* 2026-03-10 01:21:36.239337 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-10 01:21:36.239347 | orchestrator | 2026-03-10 01:21:36.239356 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2026-03-10 01:21:36.239366 | orchestrator | Tuesday 10 March 2026 01:15:40 +0000 (0:00:03.671) 0:03:24.488 ********* 2026-03-10 01:21:36.239375 | orchestrator | 
changed: [testbed-node-0] => (item=nova -> service) 2026-03-10 01:21:36.239385 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-10 01:21:36.239394 | orchestrator | 2026-03-10 01:21:36.239404 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2026-03-10 01:21:36.239420 | orchestrator | Tuesday 10 March 2026 01:15:44 +0000 (0:00:03.801) 0:03:28.289 ********* 2026-03-10 01:21:36.239430 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-10 01:21:36.239439 | orchestrator | 2026-03-10 01:21:36.239449 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2026-03-10 01:21:36.239459 | orchestrator | Tuesday 10 March 2026 01:15:47 +0000 (0:00:02.956) 0:03:31.246 ********* 2026-03-10 01:21:36.239468 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 2026-03-10 01:21:36.239478 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2026-03-10 01:21:36.239487 | orchestrator | 2026-03-10 01:21:36.239497 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2026-03-10 01:21:36.239514 | orchestrator | Tuesday 10 March 2026 01:15:54 +0000 (0:00:07.751) 0:03:38.997 ********* 2026-03-10 01:21:36.239558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:36.239584 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:36.239601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 
'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:36.239622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.239655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.239667 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.239677 | orchestrator | 2026-03-10 01:21:36.239687 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-10 01:21:36.239697 | orchestrator | Tuesday 10 March 2026 01:15:56 +0000 (0:00:01.492) 0:03:40.490 ********* 2026-03-10 01:21:36.239707 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.239717 | orchestrator | 2026-03-10 01:21:36.239726 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2026-03-10 01:21:36.239736 | orchestrator | Tuesday 10 March 2026 01:15:56 +0000 (0:00:00.156) 0:03:40.647 ********* 2026-03-10 01:21:36.239746 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.239755 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.239814 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.239824 | orchestrator | 2026-03-10 01:21:36.239834 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2026-03-10 
01:21:36.239844 | orchestrator | Tuesday 10 March 2026 01:15:57 +0000 (0:00:00.542) 0:03:41.190 ********* 2026-03-10 01:21:36.239853 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-10 01:21:36.239863 | orchestrator | 2026-03-10 01:21:36.239872 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2026-03-10 01:21:36.239882 | orchestrator | Tuesday 10 March 2026 01:15:57 +0000 (0:00:00.778) 0:03:41.968 ********* 2026-03-10 01:21:36.239891 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.239901 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.239910 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.239919 | orchestrator | 2026-03-10 01:21:36.239929 | orchestrator | TASK [nova : include_tasks] **************************************************** 2026-03-10 01:21:36.239938 | orchestrator | Tuesday 10 March 2026 01:15:58 +0000 (0:00:00.330) 0:03:42.299 ********* 2026-03-10 01:21:36.239948 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:21:36.239958 | orchestrator | 2026-03-10 01:21:36.239967 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-10 01:21:36.239977 | orchestrator | Tuesday 10 March 2026 01:15:58 +0000 (0:00:00.576) 0:03:42.875 ********* 2026-03-10 01:21:36.239993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:36.240021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:36.240034 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:36.240045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.240056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 
'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.240083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.240093 | orchestrator | 2026-03-10 01:21:36.240103 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-10 01:21:36.240113 | orchestrator | Tuesday 10 March 2026 01:16:01 +0000 (0:00:02.668) 0:03:45.544 ********* 2026-03-10 01:21:36.240124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': 
{'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 01:21:36.240134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.240145 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.240156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 01:21:36.240183 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.240194 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.240224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 
'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 01:21:36.240235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.240246 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.240256 | orchestrator | 2026-03-10 01:21:36.240265 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-10 01:21:36.240275 | orchestrator | Tuesday 10 March 2026 01:16:02 +0000 (0:00:00.667) 0:03:46.212 ********* 2026-03-10 01:21:36.240285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 01:21:36.240306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.240317 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.240337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 01:21:36.240348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.240358 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.240368 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 01:21:36.240384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.240394 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.240404 | orchestrator | 2026-03-10 01:21:36.240413 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2026-03-10 01:21:36.240423 | orchestrator | Tuesday 10 March 2026 01:16:02 +0000 (0:00:00.845) 0:03:47.057 ********* 2026-03-10 01:21:36.240445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:36.240457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:36.240468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:36.240490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.240508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.240518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.240528 | orchestrator | 2026-03-10 01:21:36.240538 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2026-03-10 01:21:36.240548 | orchestrator | Tuesday 10 March 2026 01:16:05 +0000 (0:00:02.665) 0:03:49.723 ********* 2026-03-10 01:21:36.240558 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:36.240596 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:36.240620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:36.240631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 
5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.240642 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.240652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.240669 | orchestrator | 2026-03-10 01:21:36.240679 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2026-03-10 01:21:36.240689 | orchestrator | Tuesday 10 March 2026 01:16:11 +0000 (0:00:05.911) 0:03:55.635 ********* 2026-03-10 01:21:36.240699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 01:21:36.240719 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.240730 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.240740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 01:21:36.240751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.240786 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.240797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-10 01:21:36.240812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.240822 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.240832 | orchestrator | 2026-03-10 01:21:36.240842 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2026-03-10 01:21:36.240852 | orchestrator | Tuesday 10 March 2026 01:16:12 +0000 (0:00:00.654) 0:03:56.289 ********* 2026-03-10 01:21:36.240861 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:21:36.240870 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:21:36.240880 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:21:36.240889 | orchestrator | 2026-03-10 01:21:36.240904 | orchestrator | TASK [nova : 
Copying over vendordata file] ************************************* 2026-03-10 01:21:36.240915 | orchestrator | Tuesday 10 March 2026 01:16:13 +0000 (0:00:01.607) 0:03:57.897 ********* 2026-03-10 01:21:36.240924 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.240934 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.240943 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.240953 | orchestrator | 2026-03-10 01:21:36.240963 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2026-03-10 01:21:36.240972 | orchestrator | Tuesday 10 March 2026 01:16:14 +0000 (0:00:00.374) 0:03:58.272 ********* 2026-03-10 01:21:36.240982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:36.241000 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:36.241025 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:36.241037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.241047 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.241066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.241076 | orchestrator | 2026-03-10 01:21:36.241085 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-10 01:21:36.241095 | orchestrator | Tuesday 10 March 2026 01:16:16 +0000 (0:00:02.255) 0:04:00.527 ********* 2026-03-10 01:21:36.241105 | orchestrator | 2026-03-10 01:21:36.241114 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-10 01:21:36.241124 | orchestrator | Tuesday 10 March 2026 01:16:16 +0000 (0:00:00.142) 0:04:00.670 ********* 2026-03-10 01:21:36.241133 | orchestrator | 2026-03-10 01:21:36.241143 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-10 01:21:36.241152 | orchestrator | Tuesday 10 March 2026 01:16:16 +0000 (0:00:00.160) 0:04:00.830 ********* 2026-03-10 01:21:36.241162 | orchestrator | 2026-03-10 01:21:36.241171 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-10 01:21:36.241181 | orchestrator | Tuesday 10 March 2026 01:16:16 +0000 (0:00:00.135) 0:04:00.965 ********* 2026-03-10 01:21:36.241190 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:21:36.241200 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:21:36.241209 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:21:36.241218 | orchestrator | 2026-03-10 01:21:36.241228 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-10 01:21:36.241238 | orchestrator | Tuesday 10 March 2026 01:16:34 +0000 (0:00:17.283) 0:04:18.249 ********* 2026-03-10 
01:21:36.241247 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:21:36.241256 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:21:36.241266 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:21:36.241275 | orchestrator | 2026-03-10 01:21:36.241285 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-10 01:21:36.241294 | orchestrator | 2026-03-10 01:21:36.241304 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-10 01:21:36.241313 | orchestrator | Tuesday 10 March 2026 01:16:43 +0000 (0:00:08.921) 0:04:27.171 ********* 2026-03-10 01:21:36.241323 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:21:36.241333 | orchestrator | 2026-03-10 01:21:36.241342 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-10 01:21:36.241352 | orchestrator | Tuesday 10 March 2026 01:16:44 +0000 (0:00:01.552) 0:04:28.723 ********* 2026-03-10 01:21:36.241361 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:21:36.241370 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:21:36.241380 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:21:36.241389 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.241398 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.241408 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.241417 | orchestrator | 2026-03-10 01:21:36.241431 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-10 01:21:36.241441 | orchestrator | Tuesday 10 March 2026 01:16:45 +0000 (0:00:00.671) 0:04:29.394 ********* 2026-03-10 01:21:36.241450 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.241460 | orchestrator | skipping: [testbed-node-1] 2026-03-10 
01:21:36.241475 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.241484 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:21:36.241494 | orchestrator | 2026-03-10 01:21:36.241503 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-10 01:21:36.241518 | orchestrator | Tuesday 10 March 2026 01:16:46 +0000 (0:00:01.186) 0:04:30.581 ********* 2026-03-10 01:21:36.241529 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-10 01:21:36.241538 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-10 01:21:36.241548 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-10 01:21:36.241557 | orchestrator | 2026-03-10 01:21:36.241567 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-10 01:21:36.241577 | orchestrator | Tuesday 10 March 2026 01:16:47 +0000 (0:00:00.695) 0:04:31.277 ********* 2026-03-10 01:21:36.241586 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-10 01:21:36.241596 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-10 01:21:36.241605 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-10 01:21:36.241615 | orchestrator | 2026-03-10 01:21:36.241624 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-10 01:21:36.241634 | orchestrator | Tuesday 10 March 2026 01:16:48 +0000 (0:00:01.588) 0:04:32.866 ********* 2026-03-10 01:21:36.241644 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-10 01:21:36.241653 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:21:36.241662 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-10 01:21:36.241672 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:21:36.241681 | orchestrator | skipping: [testbed-node-5] => 
(item=br_netfilter)  2026-03-10 01:21:36.241690 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:21:36.241700 | orchestrator | 2026-03-10 01:21:36.241709 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-10 01:21:36.241719 | orchestrator | Tuesday 10 March 2026 01:16:49 +0000 (0:00:00.603) 0:04:33.469 ********* 2026-03-10 01:21:36.241728 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-10 01:21:36.241738 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-10 01:21:36.241747 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.241775 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-10 01:21:36.241786 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-10 01:21:36.241795 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.241805 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-10 01:21:36.241814 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-10 01:21:36.241826 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.241841 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-10 01:21:36.241858 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-10 01:21:36.241874 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-10 01:21:36.241888 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-10 01:21:36.241902 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-10 01:21:36.241915 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-10 
01:21:36.241930 | orchestrator | 2026-03-10 01:21:36.241945 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-10 01:21:36.241960 | orchestrator | Tuesday 10 March 2026 01:16:50 +0000 (0:00:01.374) 0:04:34.844 ********* 2026-03-10 01:21:36.241976 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.241992 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.242049 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.242062 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:21:36.242071 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:21:36.242080 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:21:36.242090 | orchestrator | 2026-03-10 01:21:36.242100 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-10 01:21:36.242109 | orchestrator | Tuesday 10 March 2026 01:16:52 +0000 (0:00:01.400) 0:04:36.244 ********* 2026-03-10 01:21:36.242118 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.242128 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.242137 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.242146 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:21:36.242155 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:21:36.242165 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:21:36.242174 | orchestrator | 2026-03-10 01:21:36.242184 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-10 01:21:36.242193 | orchestrator | Tuesday 10 March 2026 01:16:54 +0000 (0:00:02.038) 0:04:38.283 ********* 2026-03-10 01:21:36.242209 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242246 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242259 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242270 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242287 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242312 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242339 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242350 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242371 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242412 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242423 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242432 | orchestrator | 2026-03-10 01:21:36.242442 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-10 01:21:36.242452 | orchestrator | Tuesday 10 March 2026 01:16:56 +0000 (0:00:02.278) 0:04:40.562 ********* 2026-03-10 01:21:36.242463 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:21:36.242474 | orchestrator | 2026-03-10 01:21:36.242490 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-10 01:21:36.242499 | orchestrator | Tuesday 10 March 2026 01:16:57 +0000 (0:00:01.330) 0:04:41.892 ********* 2026-03-10 01:21:36.242509 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242520 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242541 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242552 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242578 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242589 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 
'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242599 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242613 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242639 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242666 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242676 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242686 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.242696 | 
orchestrator | 2026-03-10 01:21:36.242714 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-10 01:21:36.242724 | orchestrator | Tuesday 10 March 2026 01:17:01 +0000 (0:00:04.081) 0:04:45.973 ********* 2026-03-10 01:21:36.242740 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-10 01:21:36.242751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-10 01:21:36.242820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 
'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.242832 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:21:36.242842 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-10 01:21:36.242853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-10 01:21:36.242874 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.242885 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:21:36.242895 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-10 01:21:36.242911 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-10 01:21:36.242922 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.242932 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:21:36.242942 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-10 01:21:36.242952 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.242962 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.243072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-10 01:21:36.243086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 
5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.243103 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.243113 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-10 01:21:36.243123 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.243133 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.243143 | orchestrator | 2026-03-10 01:21:36.243153 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-10 01:21:36.243162 | orchestrator | Tuesday 10 March 2026 01:17:03 +0000 (0:00:01.628) 0:04:47.602 ********* 2026-03-10 01:21:36.243170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 
'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-10 01:21:36.243178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-10 01:21:36.243195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.243208 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:21:36.243217 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-10 01:21:36.243225 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-10 01:21:36.243234 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.243242 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:21:36.243250 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-10 01:21:36.243262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-10 01:21:36.243280 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.243289 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:21:36.243297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-10 01:21:36.243306 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.243314 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.243322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-10 01:21:36.243330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.243338 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.243351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-10 01:21:36.243386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.243396 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.243404 | orchestrator | 2026-03-10 01:21:36.243413 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-10 01:21:36.243421 | orchestrator | Tuesday 10 March 2026 01:17:06 +0000 (0:00:02.572) 0:04:50.174 ********* 2026-03-10 01:21:36.243429 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.243436 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.243445 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.243453 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-10 01:21:36.243461 | orchestrator | 2026-03-10 01:21:36.243469 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-10 01:21:36.243477 | orchestrator | Tuesday 10 March 2026 01:17:07 +0000 (0:00:01.172) 0:04:51.346 ********* 2026-03-10 01:21:36.243484 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-10 01:21:36.243492 | 
orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-10 01:21:36.243500 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-10 01:21:36.243508 | orchestrator | 2026-03-10 01:21:36.243516 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-10 01:21:36.243524 | orchestrator | Tuesday 10 March 2026 01:17:08 +0000 (0:00:01.100) 0:04:52.447 ********* 2026-03-10 01:21:36.243532 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-10 01:21:36.243540 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-10 01:21:36.243547 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-10 01:21:36.243555 | orchestrator | 2026-03-10 01:21:36.243563 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-10 01:21:36.243571 | orchestrator | Tuesday 10 March 2026 01:17:09 +0000 (0:00:01.039) 0:04:53.487 ********* 2026-03-10 01:21:36.243579 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:21:36.243587 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:21:36.243595 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:21:36.243603 | orchestrator | 2026-03-10 01:21:36.243611 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-10 01:21:36.243619 | orchestrator | Tuesday 10 March 2026 01:17:09 +0000 (0:00:00.556) 0:04:54.044 ********* 2026-03-10 01:21:36.243627 | orchestrator | ok: [testbed-node-3] 2026-03-10 01:21:36.243635 | orchestrator | ok: [testbed-node-4] 2026-03-10 01:21:36.243643 | orchestrator | ok: [testbed-node-5] 2026-03-10 01:21:36.243651 | orchestrator | 2026-03-10 01:21:36.243658 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-10 01:21:36.243666 | orchestrator | Tuesday 10 March 2026 01:17:10 +0000 (0:00:00.898) 0:04:54.943 ********* 2026-03-10 01:21:36.243674 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-10 
01:21:36.243683 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-10 01:21:36.243690 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-10 01:21:36.243699 | orchestrator | 2026-03-10 01:21:36.243707 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-10 01:21:36.243714 | orchestrator | Tuesday 10 March 2026 01:17:12 +0000 (0:00:01.335) 0:04:56.278 ********* 2026-03-10 01:21:36.243722 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-10 01:21:36.243730 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-10 01:21:36.243738 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-10 01:21:36.243752 | orchestrator | 2026-03-10 01:21:36.243773 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-10 01:21:36.243782 | orchestrator | Tuesday 10 March 2026 01:17:13 +0000 (0:00:01.285) 0:04:57.563 ********* 2026-03-10 01:21:36.243789 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-10 01:21:36.243797 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-10 01:21:36.243805 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-10 01:21:36.243813 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-10 01:21:36.243821 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-10 01:21:36.243828 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-10 01:21:36.243836 | orchestrator | 2026-03-10 01:21:36.243844 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-10 01:21:36.243852 | orchestrator | Tuesday 10 March 2026 01:17:17 +0000 (0:00:03.942) 0:05:01.506 ********* 2026-03-10 01:21:36.243860 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:21:36.243867 | orchestrator | skipping: 
[testbed-node-4] 2026-03-10 01:21:36.243875 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:21:36.243883 | orchestrator | 2026-03-10 01:21:36.243891 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-10 01:21:36.243899 | orchestrator | Tuesday 10 March 2026 01:17:18 +0000 (0:00:00.655) 0:05:02.162 ********* 2026-03-10 01:21:36.243906 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:21:36.243914 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:21:36.243923 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:21:36.243930 | orchestrator | 2026-03-10 01:21:36.243938 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-10 01:21:36.243950 | orchestrator | Tuesday 10 March 2026 01:17:18 +0000 (0:00:00.406) 0:05:02.568 ********* 2026-03-10 01:21:36.243958 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:21:36.243966 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:21:36.243974 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:21:36.243981 | orchestrator | 2026-03-10 01:21:36.243989 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-10 01:21:36.243997 | orchestrator | Tuesday 10 March 2026 01:17:19 +0000 (0:00:01.341) 0:05:03.910 ********* 2026-03-10 01:21:36.244009 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-10 01:21:36.244018 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-10 01:21:36.244026 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-10 01:21:36.244034 | orchestrator | changed: [testbed-node-3] => (item={'uuid': 
'63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-10 01:21:36.244043 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-10 01:21:36.244051 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-10 01:21:36.244058 | orchestrator | 2026-03-10 01:21:36.244067 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-10 01:21:36.244075 | orchestrator | Tuesday 10 March 2026 01:17:23 +0000 (0:00:03.871) 0:05:07.781 ********* 2026-03-10 01:21:36.244082 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-10 01:21:36.244090 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-10 01:21:36.244098 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-10 01:21:36.244106 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-10 01:21:36.244114 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:21:36.244128 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-10 01:21:36.244135 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:21:36.244143 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-10 01:21:36.244151 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:21:36.244159 | orchestrator | 2026-03-10 01:21:36.244167 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-03-10 01:21:36.244174 | orchestrator | Tuesday 10 March 2026 01:17:27 +0000 (0:00:03.997) 0:05:11.778 ********* 2026-03-10 01:21:36.244182 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.244190 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.244198 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.244206 | orchestrator | 
included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-5, testbed-node-3, testbed-node-4 2026-03-10 01:21:36.244214 | orchestrator | 2026-03-10 01:21:36.244222 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-03-10 01:21:36.244230 | orchestrator | Tuesday 10 March 2026 01:17:29 +0000 (0:00:02.013) 0:05:13.792 ********* 2026-03-10 01:21:36.244238 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-10 01:21:36.244246 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-10 01:21:36.244253 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-10 01:21:36.244261 | orchestrator | 2026-03-10 01:21:36.244269 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-03-10 01:21:36.244277 | orchestrator | Tuesday 10 March 2026 01:17:30 +0000 (0:00:01.329) 0:05:15.121 ********* 2026-03-10 01:21:36.244285 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:21:36.244293 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:21:36.244300 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:21:36.244308 | orchestrator | 2026-03-10 01:21:36.244316 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-10 01:21:36.244324 | orchestrator | Tuesday 10 March 2026 01:17:31 +0000 (0:00:00.386) 0:05:15.508 ********* 2026-03-10 01:21:36.244332 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:21:36.244340 | orchestrator | 2026-03-10 01:21:36.244348 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-10 01:21:36.244356 | orchestrator | Tuesday 10 March 2026 01:17:31 +0000 (0:00:00.134) 0:05:15.642 ********* 2026-03-10 01:21:36.244364 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:21:36.244371 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:21:36.244379 | orchestrator | skipping: [testbed-node-5] 2026-03-10 
01:21:36.244387 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.244395 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.244403 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.244411 | orchestrator | 2026-03-10 01:21:36.244419 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-10 01:21:36.244427 | orchestrator | Tuesday 10 March 2026 01:17:32 +0000 (0:00:00.608) 0:05:16.250 ********* 2026-03-10 01:21:36.244434 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-10 01:21:36.244442 | orchestrator | 2026-03-10 01:21:36.244450 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-10 01:21:36.244458 | orchestrator | Tuesday 10 March 2026 01:17:33 +0000 (0:00:01.029) 0:05:17.280 ********* 2026-03-10 01:21:36.244470 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:21:36.244484 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:21:36.244496 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:21:36.244509 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.244521 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.244535 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.244548 | orchestrator | 2026-03-10 01:21:36.244561 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-10 01:21:36.244583 | orchestrator | Tuesday 10 March 2026 01:17:33 +0000 (0:00:00.636) 0:05:17.917 ********* 2026-03-10 01:21:36.244600 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:21:36.244618 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:21:36.244627 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:21:36.244636 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:21:36.244644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:21:36.244656 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:21:36.244678 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:21:36.244693 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:21:36.244705 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:21:36.244719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.244732 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.244745 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.244805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.244821 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.244831 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.244839 | orchestrator | 2026-03-10 01:21:36.244850 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-10 01:21:36.244863 | orchestrator | Tuesday 10 March 2026 01:17:38 +0000 (0:00:04.577) 0:05:22.494 ********* 2026-03-10 01:21:36.244878 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-10 01:21:36.244891 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-10 01:21:36.244919 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-10 01:21:36.244941 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-10 01:21:36.244950 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-10 01:21:36.244959 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-10 01:21:36.244967 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': 
'30'}}}) 2026-03-10 01:21:36.244977 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.245013 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.245028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:21:36.245042 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:21:36.245055 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:21:36.245068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.245081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.245109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.245123 | orchestrator | 2026-03-10 01:21:36.245131 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-10 01:21:36.245139 | orchestrator | Tuesday 10 March 2026 01:17:44 +0000 (0:00:06.485) 0:05:28.979 ********* 2026-03-10 01:21:36.245147 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:21:36.245155 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:21:36.245163 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:21:36.245171 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.245184 | 
orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.245192 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.245200 | orchestrator | 2026-03-10 01:21:36.245208 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-10 01:21:36.245216 | orchestrator | Tuesday 10 March 2026 01:17:46 +0000 (0:00:02.163) 0:05:31.143 ********* 2026-03-10 01:21:36.245223 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-10 01:21:36.245231 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-10 01:21:36.245239 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-10 01:21:36.245247 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-10 01:21:36.245255 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-10 01:21:36.245263 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.245271 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-10 01:21:36.245279 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-10 01:21:36.245287 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.245294 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-10 01:21:36.245303 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-10 01:21:36.245311 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.245319 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-10 01:21:36.245327 | orchestrator | changed: [testbed-node-5] => (item={'src': 
'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-10 01:21:36.245335 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-10 01:21:36.245342 | orchestrator | 2026-03-10 01:21:36.245351 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-10 01:21:36.245365 | orchestrator | Tuesday 10 March 2026 01:17:50 +0000 (0:00:03.897) 0:05:35.040 ********* 2026-03-10 01:21:36.245377 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:21:36.245391 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:21:36.245405 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:21:36.245419 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.245436 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.245444 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.245452 | orchestrator | 2026-03-10 01:21:36.245460 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-10 01:21:36.245468 | orchestrator | Tuesday 10 March 2026 01:17:51 +0000 (0:00:00.596) 0:05:35.636 ********* 2026-03-10 01:21:36.245476 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-10 01:21:36.245484 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-10 01:21:36.245491 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-10 01:21:36.245499 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-10 01:21:36.245507 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-10 01:21:36.245515 | orchestrator | changed: 
[testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-10 01:21:36.245523 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2026-03-10 01:21:36.245530 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-10 01:21:36.245538 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2026-03-10 01:21:36.245546 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-10 01:21:36.245553 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.245561 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-10 01:21:36.245569 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.245576 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2026-03-10 01:21:36.245584 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.245596 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-10 01:21:36.245604 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-10 01:21:36.245612 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2026-03-10 01:21:36.245620 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-10 01:21:36.245632 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 
2026-03-10 01:21:36.245640 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2026-03-10 01:21:36.245648 | orchestrator | 2026-03-10 01:21:36.245656 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2026-03-10 01:21:36.245664 | orchestrator | Tuesday 10 March 2026 01:17:57 +0000 (0:00:05.564) 0:05:41.201 ********* 2026-03-10 01:21:36.245672 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-10 01:21:36.245679 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-10 01:21:36.245687 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2026-03-10 01:21:36.245695 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-10 01:21:36.245703 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-10 01:21:36.245716 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-10 01:21:36.245724 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-10 01:21:36.245732 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-10 01:21:36.245740 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2026-03-10 01:21:36.245747 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-10 01:21:36.245755 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-10 01:21:36.245813 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2026-03-10 01:21:36.245821 | orchestrator | 
skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-10 01:21:36.245829 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.245837 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-10 01:21:36.245845 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-10 01:21:36.245853 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.245860 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-10 01:21:36.245868 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-10 01:21:36.245876 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2026-03-10 01:21:36.245884 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.245892 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-10 01:21:36.245899 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-10 01:21:36.245907 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-10 01:21:36.245915 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-10 01:21:36.245923 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-10 01:21:36.245930 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-10 01:21:36.245938 | orchestrator | 2026-03-10 01:21:36.245946 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2026-03-10 01:21:36.245953 | orchestrator | Tuesday 10 March 2026 01:18:04 +0000 (0:00:07.193) 0:05:48.394 ********* 2026-03-10 01:21:36.245961 | orchestrator | skipping: 
[testbed-node-3] 2026-03-10 01:21:36.245969 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:21:36.245977 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:21:36.245984 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.245992 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.246000 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.246007 | orchestrator | 2026-03-10 01:21:36.246051 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2026-03-10 01:21:36.246062 | orchestrator | Tuesday 10 March 2026 01:18:05 +0000 (0:00:00.889) 0:05:49.283 ********* 2026-03-10 01:21:36.246070 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:21:36.246078 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:21:36.246085 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:21:36.246093 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.246101 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.246110 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.246118 | orchestrator | 2026-03-10 01:21:36.246126 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2026-03-10 01:21:36.246141 | orchestrator | Tuesday 10 March 2026 01:18:05 +0000 (0:00:00.676) 0:05:49.960 ********* 2026-03-10 01:21:36.246155 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.246163 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.246171 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.246179 | orchestrator | changed: [testbed-node-3] 2026-03-10 01:21:36.246186 | orchestrator | changed: [testbed-node-5] 2026-03-10 01:21:36.246194 | orchestrator | changed: [testbed-node-4] 2026-03-10 01:21:36.246202 | orchestrator | 2026-03-10 01:21:36.246210 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2026-03-10 01:21:36.246216 | 
orchestrator | Tuesday 10 March 2026 01:18:08 +0000 (0:00:02.358) 0:05:52.319 ********* 2026-03-10 01:21:36.246242 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-10 01:21:36.246250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-10 01:21:36.246257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.246265 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:21:36.246272 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-10 01:21:36.246282 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-10 01:21:36.246300 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.246308 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:21:36.246315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-10 01:21:36.246322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-10 01:21:36.246329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.246337 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:21:36.246343 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-10 
01:21:36.246359 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.246366 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.246380 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-10 01:21:36.246387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.246394 | orchestrator | skipping: [testbed-node-1] 
2026-03-10 01:21:36.246401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-10 01:21:36.246408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-10 01:21:36.246415 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.246422 | orchestrator | 2026-03-10 01:21:36.246429 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ****************** 2026-03-10 01:21:36.246436 | orchestrator | Tuesday 10 March 2026 01:18:09 +0000 (0:00:01.759) 0:05:54.078 ********* 2026-03-10 01:21:36.246443 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-10 01:21:36.246449 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-10 01:21:36.246456 | orchestrator | skipping: [testbed-node-3] 2026-03-10 01:21:36.246462 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-10 01:21:36.246473 | 
orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-10 01:21:36.246480 | orchestrator | skipping: [testbed-node-4] 2026-03-10 01:21:36.246487 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-10 01:21:36.246493 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-10 01:21:36.246500 | orchestrator | skipping: [testbed-node-5] 2026-03-10 01:21:36.246507 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-10 01:21:36.246513 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-10 01:21:36.246520 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:36.246526 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-10 01:21:36.246533 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-10 01:21:36.246540 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:36.246546 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-10 01:21:36.246553 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-10 01:21:36.246559 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:36.246566 | orchestrator | 2026-03-10 01:21:36.246573 | orchestrator | TASK [nova-cell : Check nova-cell containers] ********************************** 2026-03-10 01:21:36.246580 | orchestrator | Tuesday 10 March 2026 01:18:10 +0000 (0:00:00.964) 0:05:55.043 ********* 2026-03-10 01:21:36.246594 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:21:36.246602 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:21:36.246609 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-10 01:21:36.246620 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:21:36.246628 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:21:36.246638 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 01:21:36.246650 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:21:36.246658 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-10 01:21:36.246665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-10 
01:21:36.246672 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.246683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.246690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 
'timeout': '30'}}}) 2026-03-10 01:21:36.246705 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.246712 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:36.246719 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-10 01:21:36.246726 | orchestrator |
2026-03-10 01:21:36.246733 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-10 01:21:36.246740 | orchestrator | Tuesday 10 March 2026 01:18:13 +0000 (0:00:02.989) 0:05:58.032 *********
2026-03-10 01:21:36.246751 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:21:36.246772 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:21:36.246779 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:21:36.246786 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:21:36.246792 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:21:36.246799 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:21:36.246805 | orchestrator |
2026-03-10 01:21:36.246812 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-10 01:21:36.246818 | orchestrator | Tuesday 10 March 2026 01:18:14 +0000 (0:00:00.842) 0:05:58.874 *********
2026-03-10 01:21:36.246825 | orchestrator |
2026-03-10 01:21:36.246832 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-10 01:21:36.246838 | orchestrator | Tuesday 10 March 2026 01:18:14 +0000 (0:00:00.148) 0:05:59.022 *********
2026-03-10 01:21:36.246845 | orchestrator |
2026-03-10 01:21:36.246852 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-10 01:21:36.246858 | orchestrator | Tuesday 10 March 2026 01:18:15 +0000 (0:00:00.145) 0:05:59.168 *********
2026-03-10 01:21:36.246865 | orchestrator |
2026-03-10 01:21:36.246871 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-10 01:21:36.246878 | orchestrator | Tuesday 10 March 2026 01:18:15 +0000 (0:00:00.137) 0:05:59.305 *********
2026-03-10 01:21:36.246885 | orchestrator |
2026-03-10 01:21:36.246891 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-10 01:21:36.246898 | orchestrator | Tuesday 10 March 2026 01:18:15 +0000 (0:00:00.356) 0:05:59.662 *********
2026-03-10 01:21:36.246904 | orchestrator |
2026-03-10 01:21:36.246911 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-10 01:21:36.246917 | orchestrator | Tuesday 10 March 2026 01:18:15 +0000 (0:00:00.133) 0:05:59.795 *********
2026-03-10 01:21:36.246924 | orchestrator |
2026-03-10 01:21:36.246931 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-03-10 01:21:36.246937 | orchestrator | Tuesday 10 March 2026 01:18:15 +0000 (0:00:00.136) 0:05:59.932 *********
2026-03-10 01:21:36.246944 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:36.246950 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:21:36.246957 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:21:36.246964 | orchestrator |
2026-03-10 01:21:36.246970 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-03-10 01:21:36.246977 | orchestrator | Tuesday 10 March 2026 01:18:28 +0000 (0:00:12.525) 0:06:12.458 *********
2026-03-10 01:21:36.246984 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:36.246990 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:21:36.246997 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:21:36.247003 | orchestrator |
2026-03-10 01:21:36.247010 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-03-10 01:21:36.247017 | orchestrator | Tuesday 10 March 2026 01:18:47 +0000 (0:00:18.892) 0:06:31.350 *********
2026-03-10 01:21:36.247023 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:21:36.247030 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:21:36.247040 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:21:36.247046 | orchestrator |
2026-03-10 01:21:36.247053 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-03-10 01:21:36.247060 | orchestrator | Tuesday 10 March 2026 01:19:11 +0000 (0:00:23.896) 0:06:55.247 *********
2026-03-10 01:21:36.247066 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:21:36.247073 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:21:36.247080 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:21:36.247086 | orchestrator |
2026-03-10 01:21:36.247093 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-03-10 01:21:36.247099 | orchestrator | Tuesday 10 March 2026 01:19:48 +0000 (0:00:37.388) 0:07:32.636 *********
2026-03-10 01:21:36.247110 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:21:36.247117 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:21:36.247128 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:21:36.247135 | orchestrator |
2026-03-10 01:21:36.247142 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-03-10 01:21:36.247148 | orchestrator | Tuesday 10 March 2026 01:19:49 +0000 (0:00:01.000) 0:07:33.636 *********
2026-03-10 01:21:36.247155 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:21:36.247165 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:21:36.247171 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:21:36.247178 | orchestrator |
2026-03-10 01:21:36.247185 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2026-03-10 01:21:36.247191 | orchestrator | Tuesday 10 March 2026 01:19:50 +0000 (0:00:00.893) 0:07:34.530 *********
2026-03-10 01:21:36.247198 | orchestrator | changed: [testbed-node-3]
2026-03-10 01:21:36.247205 | orchestrator | changed: [testbed-node-4]
2026-03-10 01:21:36.247212 | orchestrator | changed: [testbed-node-5]
2026-03-10 01:21:36.247218 | orchestrator |
2026-03-10 01:21:36.247225 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2026-03-10 01:21:36.247232 | orchestrator | Tuesday 10 March 2026 01:20:15 +0000 (0:00:25.440) 0:07:59.970 *********
2026-03-10 01:21:36.247238 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:21:36.247245 | orchestrator |
2026-03-10 01:21:36.247252 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2026-03-10 01:21:36.247258 | orchestrator | Tuesday 10 March 2026 01:20:15 +0000 (0:00:00.144) 0:08:00.115 *********
2026-03-10 01:21:36.247265 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:21:36.247272 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:21:36.247278 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:21:36.247285 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:21:36.247291 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:21:36.247298 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2026-03-10 01:21:36.247305 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-10 01:21:36.247312 | orchestrator |
2026-03-10 01:21:36.247318 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2026-03-10 01:21:36.247325 | orchestrator | Tuesday 10 March 2026 01:20:39 +0000 (0:00:23.320) 0:08:23.435 *********
2026-03-10 01:21:36.247332 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:21:36.247338 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:21:36.247345 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:21:36.247352 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:21:36.247358 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:21:36.247365 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:21:36.247371 | orchestrator |
2026-03-10 01:21:36.247378 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2026-03-10 01:21:36.247385 | orchestrator | Tuesday 10 March 2026 01:20:50 +0000 (0:00:10.747) 0:08:34.182 *********
2026-03-10 01:21:36.247391 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:21:36.247398 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:21:36.247405 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:21:36.247411 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:21:36.247418 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:21:36.247425 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2026-03-10 01:21:36.247431 | orchestrator |
2026-03-10 01:21:36.247438 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-10 01:21:36.247445 | orchestrator | Tuesday 10 March 2026 01:20:54 +0000 (0:00:04.326) 0:08:38.509 *********
2026-03-10 01:21:36.247451 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-10 01:21:36.247458 | orchestrator |
2026-03-10 01:21:36.247464 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-10 01:21:36.247471 | orchestrator | Tuesday 10 March 2026 01:21:09 +0000 (0:00:14.680) 0:08:53.189 *********
2026-03-10 01:21:36.247485 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-10 01:21:36.247492 | orchestrator |
2026-03-10 01:21:36.247498 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2026-03-10 01:21:36.247505 | orchestrator | Tuesday 10 March 2026 01:21:10 +0000 (0:00:01.784) 0:08:54.974 *********
2026-03-10 01:21:36.247512 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:21:36.247518 | orchestrator |
2026-03-10 01:21:36.247525 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2026-03-10 01:21:36.247531 | orchestrator | Tuesday 10 March 2026 01:21:12 +0000 (0:00:01.592) 0:08:56.566 *********
2026-03-10 01:21:36.247538 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2026-03-10 01:21:36.247545 | orchestrator |
2026-03-10 01:21:36.247552 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2026-03-10 01:21:36.247558 | orchestrator | Tuesday 10 March 2026 01:21:25 +0000 (0:00:13.306) 0:09:09.872 *********
2026-03-10 01:21:36.247565 | orchestrator | ok: [testbed-node-3]
2026-03-10 01:21:36.247571 | orchestrator | ok: [testbed-node-4]
2026-03-10 01:21:36.247578 | orchestrator | ok: [testbed-node-5]
2026-03-10 01:21:36.247585 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:21:36.247591 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:21:36.247598 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:21:36.247605 | orchestrator |
2026-03-10 01:21:36.247611 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2026-03-10 01:21:36.247618 | orchestrator |
2026-03-10 01:21:36.247628 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2026-03-10 01:21:36.247635 | orchestrator | Tuesday 10 March 2026 01:21:27 +0000 (0:00:02.005) 0:09:11.878 *********
2026-03-10 01:21:36.247642 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:36.247648 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:21:36.247655 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:21:36.247662 | orchestrator |
2026-03-10 01:21:36.247668 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2026-03-10 01:21:36.247675 | orchestrator |
2026-03-10 01:21:36.247681 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2026-03-10 01:21:36.247688 | orchestrator | Tuesday 10 March 2026 01:21:28 +0000 (0:00:01.263) 0:09:13.141 *********
2026-03-10 01:21:36.247698 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:21:36.247705 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:21:36.247712 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:21:36.247719 | orchestrator |
2026-03-10 01:21:36.247726 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2026-03-10 01:21:36.247732 | orchestrator |
2026-03-10 01:21:36.247739 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2026-03-10 01:21:36.247746 | orchestrator | Tuesday 10 March 2026 01:21:29 +0000 (0:00:00.527) 0:09:13.669 *********
2026-03-10 01:21:36.247752 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2026-03-10 01:21:36.247771 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-10 01:21:36.247778 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-10 01:21:36.247785 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2026-03-10 01:21:36.247792 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2026-03-10 01:21:36.247798 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2026-03-10 01:21:36.247805 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2026-03-10 01:21:36.247811 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-10 01:21:36.247818 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-10 01:21:36.247824 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2026-03-10 01:21:36.247831 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2026-03-10 01:21:36.247837 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2026-03-10 01:21:36.247848 | orchestrator | skipping: [testbed-node-3]
2026-03-10 01:21:36.247855 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2026-03-10 01:21:36.247862 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-10 01:21:36.247868 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-10 01:21:36.247874 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2026-03-10 01:21:36.247881 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2026-03-10 01:21:36.247887 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2026-03-10 01:21:36.247894 | orchestrator | skipping: [testbed-node-4]
2026-03-10 01:21:36.247901 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2026-03-10 01:21:36.247907 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-10 01:21:36.247914 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-10 01:21:36.247920 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2026-03-10 01:21:36.247927 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2026-03-10 01:21:36.247933 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2026-03-10 01:21:36.247940 | orchestrator | skipping: [testbed-node-5]
2026-03-10 01:21:36.247947 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2026-03-10 01:21:36.247953 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-10 01:21:36.247960 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-10 01:21:36.247966 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2026-03-10 01:21:36.247973 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2026-03-10 01:21:36.247979 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2026-03-10 01:21:36.247986 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:21:36.247992 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:21:36.247999 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2026-03-10 01:21:36.248005 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-10 01:21:36.248012 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-10 01:21:36.248018 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2026-03-10 01:21:36.248025 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2026-03-10 01:21:36.248032 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2026-03-10 01:21:36.248038 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:21:36.248045 | orchestrator |
2026-03-10 01:21:36.248051 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2026-03-10 01:21:36.248058 | orchestrator |
2026-03-10 01:21:36.248064 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2026-03-10 01:21:36.248071 | orchestrator | Tuesday 10 March 2026 01:21:31 +0000 (0:00:01.617) 0:09:15.287 *********
2026-03-10 01:21:36.248077 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2026-03-10 01:21:36.248084 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2026-03-10 01:21:36.248091 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:21:36.248097 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2026-03-10 01:21:36.248104 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2026-03-10 01:21:36.248110 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:21:36.248117 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2026-03-10 01:21:36.248127 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2026-03-10 01:21:36.248133 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:21:36.248140 | orchestrator |
2026-03-10 01:21:36.248147 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2026-03-10 01:21:36.248153 | orchestrator |
2026-03-10 01:21:36.248160 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2026-03-10 01:21:36.248170 | orchestrator | Tuesday 10 March 2026 01:21:32 +0000 (0:00:00.932) 0:09:16.220 *********
2026-03-10 01:21:36.248177 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:21:36.248184 | orchestrator |
2026-03-10 01:21:36.248190 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2026-03-10 01:21:36.248197 | orchestrator |
2026-03-10 01:21:36.248208 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2026-03-10 01:21:36.248215 | orchestrator | Tuesday 10 March 2026 01:21:32 +0000 (0:00:00.800) 0:09:17.020 *********
2026-03-10 01:21:36.248222 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:21:36.248228 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:21:36.248235 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:21:36.248241 | orchestrator |
2026-03-10 01:21:36.248248 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:21:36.248255 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 01:21:36.248262 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0
2026-03-10 01:21:36.248269 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0
2026-03-10 01:21:36.248276 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0
2026-03-10 01:21:36.248283 | orchestrator | testbed-node-3 : ok=40  changed=27  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0
2026-03-10 01:21:36.248289 | orchestrator | testbed-node-4 : ok=44  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2026-03-10 01:21:36.248296 | orchestrator | testbed-node-5 : ok=39  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-10 01:21:36.248302 | orchestrator |
2026-03-10 01:21:36.248309 | orchestrator |
2026-03-10 01:21:36.248316 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:21:36.248322 | orchestrator | Tuesday 10 March 2026 01:21:33 +0000 (0:00:00.690) 0:09:17.710 *********
2026-03-10 01:21:36.248329 | orchestrator | ===============================================================================
2026-03-10 01:21:36.248336 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 37.39s
2026-03-10 01:21:36.248342 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 33.40s
2026-03-10 01:21:36.248349 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 26.18s
2026-03-10 01:21:36.248355 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.44s
2026-03-10 01:21:36.248362 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 23.90s
2026-03-10 01:21:36.248369 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 23.32s
2026-03-10 01:21:36.248375 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 20.44s
2026-03-10 01:21:36.248382 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 18.89s
2026-03-10 01:21:36.248389 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 17.28s
2026-03-10 01:21:36.248395 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.95s
2026-03-10 01:21:36.248402 | orchestrator | nova-cell : Create cell ------------------------------------------------ 15.00s
2026-03-10 01:21:36.248409 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.68s
2026-03-10 01:21:36.248415 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.25s
2026-03-10 01:21:36.248422 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 14.09s
2026-03-10 01:21:36.248432 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 13.31s
2026-03-10 01:21:36.248439 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.53s
2026-03-10 01:21:36.248445 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------ 10.75s
2026-03-10 01:21:36.248452 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 9.44s
2026-03-10 01:21:36.248458 | orchestrator | nova : Restart nova-api container --------------------------------------- 8.92s
2026-03-10 01:21:36.248465 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.75s
2026-03-10 01:21:36.248472 | orchestrator | 2026-03-10 01:21:36 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:21:39.277482 | orchestrator | 2026-03-10 01:21:39 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state STARTED
2026-03-10 01:21:39.277595 | orchestrator | 2026-03-10 01:21:39 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:21:42.311362 | orchestrator | 2026-03-10 01:21:42 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state STARTED
2026-03-10 01:21:42.311471 | orchestrator | 2026-03-10 01:21:42 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:21:45.354929 | orchestrator | 2026-03-10 01:21:45 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state STARTED
2026-03-10 01:21:45.355009 | orchestrator | 2026-03-10 01:21:45 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:21:48.397207 | orchestrator | 2026-03-10 01:21:48 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state STARTED
2026-03-10 01:21:48.397296 | orchestrator | 2026-03-10 01:21:48 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:21:51.448481 | orchestrator | 2026-03-10 01:21:51 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state STARTED
2026-03-10 01:21:51.448589 | orchestrator | 2026-03-10 01:21:51 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:21:54.505138 | orchestrator | 2026-03-10 01:21:54 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state STARTED
2026-03-10 01:21:54.505258 | orchestrator | 2026-03-10 01:21:54 | INFO  | Wait 1 second(s) until the next check
2026-03-10 01:21:57.556060 | orchestrator | 2026-03-10 01:21:57 | INFO  | Task f962c677-fa58-4ab1-828a-ac2059b72ca2 is in state SUCCESS
2026-03-10 01:21:57.557298 | orchestrator |
2026-03-10 01:21:57.557454 | orchestrator |
2026-03-10 01:21:57.557472 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-10 01:21:57.557482 | orchestrator |
2026-03-10 01:21:57.557492 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-10 01:21:57.557501 | orchestrator | Tuesday 10 March 2026 01:16:47 +0000 (0:00:00.303) 0:00:00.303 *********
2026-03-10 01:21:57.557510 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:21:57.557520 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:21:57.557529 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:21:57.557537 | orchestrator |
2026-03-10 01:21:57.557546 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-10 01:21:57.557555 | orchestrator | Tuesday 10 March 2026 01:16:48 +0000 (0:00:00.295) 0:00:00.599 *********
2026-03-10 01:21:57.557564 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2026-03-10 01:21:57.557573 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2026-03-10 01:21:57.557582 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2026-03-10 01:21:57.557591 | orchestrator |
2026-03-10 01:21:57.557605 | orchestrator | PLAY [Apply role octavia] ******************************************************
2026-03-10 01:21:57.557664 | orchestrator |
2026-03-10 01:21:57.557683 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-10 01:21:57.557698 | orchestrator | Tuesday 10 March 2026 01:16:48 +0000 (0:00:00.484) 0:00:01.084 *********
2026-03-10 01:21:57.557741 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:21:57.557809 | orchestrator |
2026-03-10 01:21:57.557823 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2026-03-10 01:21:57.557837 | orchestrator | Tuesday 10 March 2026 01:16:49 +0000 (0:00:00.605) 0:00:01.689 *********
2026-03-10 01:21:57.557853 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2026-03-10 01:21:57.557869 | orchestrator |
2026-03-10 01:21:57.557883 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2026-03-10 01:21:57.557896 | orchestrator | Tuesday 10 March 2026 01:16:53 +0000 (0:00:04.149) 0:00:05.838 *********
2026-03-10 01:21:57.557910 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2026-03-10 01:21:57.557925 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2026-03-10 01:21:57.557939 | orchestrator |
2026-03-10 01:21:57.557953 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2026-03-10 01:21:57.557968 | orchestrator | Tuesday 10 March 2026 01:17:00 +0000 (0:00:06.581) 0:00:12.420 *********
2026-03-10 01:21:57.557983 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-10 01:21:57.557999 | orchestrator |
2026-03-10 01:21:57.558079 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2026-03-10 01:21:57.558119 | orchestrator | Tuesday 10 March 2026 01:17:03 +0000 (0:00:03.711) 0:00:16.132 *********
2026-03-10 01:21:57.558136 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-03-10 01:21:57.558253 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2026-03-10 01:21:57.558273 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-10 01:21:57.558288 | orchestrator |
2026-03-10 01:21:57.558305 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2026-03-10 01:21:57.558321 | orchestrator | Tuesday 10 March 2026 01:17:12 +0000 (0:00:09.169) 0:00:25.302 *********
2026-03-10 01:21:57.558337 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-10 01:21:57.558353 | orchestrator |
2026-03-10 01:21:57.558369 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2026-03-10 01:21:57.558386 | orchestrator | Tuesday 10 March 2026 01:17:16 +0000 (0:00:03.710) 0:00:29.012 *********
2026-03-10 01:21:57.558396 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2026-03-10 01:21:57.558405 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2026-03-10 01:21:57.558414 | orchestrator |
2026-03-10 01:21:57.558452 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2026-03-10 01:21:57.558463 | orchestrator | Tuesday 10 March 2026 01:17:24 +0000 (0:00:07.865) 0:00:36.878 *********
2026-03-10 01:21:57.558495 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2026-03-10 01:21:57.558505 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2026-03-10 01:21:57.558514 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2026-03-10 01:21:57.558523 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2026-03-10 01:21:57.558532 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2026-03-10 01:21:57.558541 | orchestrator |
2026-03-10 01:21:57.558550 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-10 01:21:57.558558 | orchestrator | Tuesday 10 March 2026 01:17:41 +0000 (0:00:17.040) 0:00:53.919 *********
2026-03-10 01:21:57.558568 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:21:57.558576 | orchestrator |
2026-03-10 01:21:57.558585 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2026-03-10 01:21:57.558594 | orchestrator | Tuesday 10 March 2026 01:17:42 +0000 (0:00:00.673) 0:00:54.593 *********
2026-03-10 01:21:57.558617 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.558626 | orchestrator |
2026-03-10 01:21:57.558635 | orchestrator | TASK [octavia : Create nova keypair for amphora] *******************************
2026-03-10 01:21:57.558644 | orchestrator | Tuesday 10 March 2026 01:17:48 +0000 (0:00:05.932) 0:01:00.525 *********
2026-03-10 01:21:57.558655 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.558669 | orchestrator |
2026-03-10 01:21:57.558684 | orchestrator | TASK [octavia : Get service project id] ****************************************
2026-03-10 01:21:57.558720 | orchestrator | Tuesday 10 March 2026 01:17:53 +0000 (0:00:03.634) 0:01:05.801 *********
2026-03-10 01:21:57.558736 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:21:57.558785 | orchestrator |
2026-03-10 01:21:57.558805 | orchestrator | TASK [octavia : Create security groups for octavia] ****************************
2026-03-10 01:21:57.558820 | orchestrator | Tuesday 10 March 2026 01:17:57 +0000 (0:00:03.634) 0:01:09.435 *********
2026-03-10 01:21:57.558835 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp)
2026-03-10 01:21:57.558849 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp)
2026-03-10 01:21:57.558888 | orchestrator |
2026-03-10 01:21:57.558903 | orchestrator | TASK [octavia : Add rules for security groups] *********************************
2026-03-10 01:21:57.558917 | orchestrator | Tuesday 10 March 2026 01:18:07 +0000 (0:00:10.848) 0:01:20.284 *********
2026-03-10 01:21:57.558933 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}])
2026-03-10 01:21:57.558949 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}])
2026-03-10 01:21:57.558984 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}])
2026-03-10 01:21:57.559004 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}])
2026-03-10 01:21:57.559018 | orchestrator |
2026-03-10 01:21:57.559031 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************
2026-03-10 01:21:57.559040 | orchestrator | Tuesday 10 March 2026 01:18:24 +0000 (0:00:16.943) 0:01:37.228 *********
2026-03-10 01:21:57.559049 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.559058 | orchestrator |
2026-03-10 01:21:57.559066 | orchestrator | TASK [octavia : Create loadbalancer management subnet] *************************
2026-03-10 01:21:57.559075 | orchestrator | Tuesday 10 March 2026 01:18:29 +0000 (0:00:04.869) 0:01:42.097 *********
2026-03-10 01:21:57.559084 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.559093 | orchestrator |
2026-03-10 01:21:57.559102 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] ****************
2026-03-10 01:21:57.559111 | orchestrator | Tuesday 10 March 2026 01:18:35 +0000 (0:00:05.940) 0:01:48.038 *********
2026-03-10 01:21:57.559120 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:21:57.559129 | orchestrator |
2026-03-10 01:21:57.559137 | orchestrator | TASK [octavia : Update loadbalancer management subnet] *************************
2026-03-10 01:21:57.559146 | orchestrator | Tuesday 10 March 2026 01:18:35 +0000 (0:00:00.236) 0:01:48.274 *********
2026-03-10 01:21:57.559155 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:21:57.559163 | orchestrator |
2026-03-10 01:21:57.559173 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-10 01:21:57.559181 | orchestrator | Tuesday 10 March 2026 01:18:40 +0000 (0:00:04.803) 0:01:53.078 *********
2026-03-10 01:21:57.559190 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:21:57.559199 | orchestrator |
2026-03-10 01:21:57.559208 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] *****************
2026-03-10 01:21:57.559217 | orchestrator | Tuesday 10 March 2026 01:18:41 +0000 (0:00:01.145) 0:01:54.224 *********
2026-03-10 01:21:57.559226 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.559235 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:21:57.559254 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:21:57.559262 | orchestrator |
2026-03-10 01:21:57.559272 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ********************
2026-03-10 01:21:57.559281 | orchestrator | Tuesday 10 March 2026 01:18:48 +0000 (0:00:06.218) 0:02:00.442 *********
2026-03-10 01:21:57.559289 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:21:57.559298 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.559307 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:21:57.559315 | orchestrator |
2026-03-10 01:21:57.559324 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************
2026-03-10 01:21:57.559332 | orchestrator | Tuesday 10 March 2026 01:18:53 +0000 (0:00:05.305) 0:02:05.748 *********
2026-03-10 01:21:57.559341 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.559356 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:21:57.559366 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:21:57.559374 | orchestrator |
2026-03-10 01:21:57.559383 | orchestrator | TASK [octavia : Install isc-dhcp-client package] *******************************
2026-03-10 01:21:57.559391 | orchestrator | Tuesday 10 March 2026 01:18:54 +0000 (0:00:00.915) 0:02:06.664 *********
2026-03-10 01:21:57.559400 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:21:57.559409 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:21:57.559417 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:21:57.559426 | orchestrator |
2026-03-10 01:21:57.559434 | orchestrator | TASK [octavia : Create octavia dhclient conf] **********************************
2026-03-10 01:21:57.559445 | orchestrator | Tuesday 10 March 2026 01:18:56 +0000 (0:00:02.150) 0:02:08.814 *********
2026-03-10 01:21:57.559458 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.559467 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:21:57.559476 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:21:57.559485 | orchestrator |
2026-03-10 01:21:57.559493 | orchestrator | TASK [octavia : Create octavia-interface service] ******************************
2026-03-10 01:21:57.559502 | orchestrator | Tuesday 10 March 2026 01:18:57 +0000 (0:00:01.304) 0:02:10.118 *********
2026-03-10 01:21:57.559510 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.559518 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:21:57.559527 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:21:57.559536 | orchestrator |
2026-03-10 01:21:57.559544 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] *****************
2026-03-10 01:21:57.559552 | orchestrator | Tuesday 10 March 2026 01:18:59 +0000 (0:00:02.151) 0:02:11.314 *********
2026-03-10 01:21:57.559561 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.559569 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:21:57.559578 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:21:57.559586 | orchestrator |
2026-03-10 01:21:57.559615 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ********************
2026-03-10 01:21:57.559624 | orchestrator | Tuesday 10 March 2026 01:19:01 +0000 (0:00:02.151) 0:02:13.465 *********
2026-03-10 01:21:57.559633 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.559642 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:21:57.559650 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:21:57.559659 | orchestrator |
2026-03-10 01:21:57.559667 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] *****************************
2026-03-10 01:21:57.559676 | orchestrator | Tuesday 10 March 2026 01:19:02 +0000 (0:00:01.838) 0:02:15.304 *********
2026-03-10 01:21:57.559684 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:21:57.559693 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:21:57.559702 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:21:57.559710 | orchestrator |
2026-03-10 01:21:57.559719 | orchestrator | TASK [octavia : Gather facts] **************************************************
2026-03-10 01:21:57.559727 | orchestrator | Tuesday 10 March 2026 01:19:03 +0000 (0:00:00.699) 0:02:16.003 *********
2026-03-10 01:21:57.559736 | orchestrator | ok: [testbed-node-1]
2026-03-10 01:21:57.559745 | orchestrator | ok: [testbed-node-2]
2026-03-10 01:21:57.559809 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:21:57.559819 | orchestrator |
2026-03-10 01:21:57.559834 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-10 01:21:57.559844 | orchestrator | Tuesday 10 March 2026 01:19:07 +0000 (0:00:03.783) 0:02:19.786 *********
2026-03-10 01:21:57.559853 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-10 01:21:57.559861 | orchestrator |
2026-03-10 01:21:57.559870 | orchestrator | TASK [octavia : Get amphora flavor info] ***************************************
2026-03-10 01:21:57.559879 | orchestrator | Tuesday 10 March 2026 01:19:08 +0000 (0:00:00.826) 0:02:20.613 *********
2026-03-10 01:21:57.559887 | orchestrator | ok: [testbed-node-0]
2026-03-10 01:21:57.559896 | orchestrator |
2026-03-10 01:21:57.559905 | orchestrator | TASK [octavia : Get service project id]
**************************************** 2026-03-10 01:21:57.559914 | orchestrator | Tuesday 10 March 2026 01:19:12 +0000 (0:00:04.236) 0:02:24.849 ********* 2026-03-10 01:21:57.559922 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:21:57.559931 | orchestrator | 2026-03-10 01:21:57.559940 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-10 01:21:57.559948 | orchestrator | Tuesday 10 March 2026 01:19:16 +0000 (0:00:03.655) 0:02:28.505 ********* 2026-03-10 01:21:57.559957 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-10 01:21:57.559966 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-10 01:21:57.559974 | orchestrator | 2026-03-10 01:21:57.559983 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-10 01:21:57.559992 | orchestrator | Tuesday 10 March 2026 01:19:23 +0000 (0:00:07.344) 0:02:35.849 ********* 2026-03-10 01:21:57.560000 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:21:57.560009 | orchestrator | 2026-03-10 01:21:57.560018 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-10 01:21:57.560026 | orchestrator | Tuesday 10 March 2026 01:19:27 +0000 (0:00:03.692) 0:02:39.542 ********* 2026-03-10 01:21:57.560035 | orchestrator | ok: [testbed-node-0] 2026-03-10 01:21:57.560043 | orchestrator | ok: [testbed-node-1] 2026-03-10 01:21:57.560052 | orchestrator | ok: [testbed-node-2] 2026-03-10 01:21:57.560061 | orchestrator | 2026-03-10 01:21:57.560070 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-10 01:21:57.560078 | orchestrator | Tuesday 10 March 2026 01:19:27 +0000 (0:00:00.348) 0:02:39.891 ********* 2026-03-10 01:21:57.560095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:57.560118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:57.560134 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 
'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:57.560145 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:21:57.560156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:21:57.560165 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:21:57.560179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.560189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.560210 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.560220 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.560229 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.560239 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.560249 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:57.560261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:57.560271 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:57.560285 | orchestrator | 2026-03-10 01:21:57.560294 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-10 01:21:57.560303 | orchestrator | Tuesday 10 March 2026 01:19:30 +0000 (0:00:02.553) 0:02:42.445 ********* 2026-03-10 01:21:57.560312 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:57.560321 | orchestrator | 2026-03-10 01:21:57.560334 | orchestrator | TASK [octavia : Set octavia policy file] *************************************** 2026-03-10 01:21:57.560344 | orchestrator | Tuesday 10 March 2026 01:19:30 +0000 (0:00:00.152) 0:02:42.598 ********* 2026-03-10 01:21:57.560353 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:57.560362 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:57.560371 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:57.560379 | orchestrator | 2026-03-10 01:21:57.560388 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-10 01:21:57.560397 | orchestrator | Tuesday 10 March 2026 01:19:30 +0000 (0:00:00.527) 0:02:43.125 ********* 2026-03-10 01:21:57.560406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 
'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 01:21:57.560415 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 01:21:57.560425 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 01:21:57.560439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 01:21:57.560453 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:21:57.560463 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:57.560478 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 01:21:57.560488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 
'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 01:21:57.560497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 01:21:57.560506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 01:21:57.560515 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 01:21:57.560530 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:21:57.560546 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 01:21:57.560556 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:57.560565 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': 
{'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 01:21:57.560598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 01:21:57.560608 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:21:57.560618 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:57.560626 | orchestrator | 2026-03-10 01:21:57.560635 | orchestrator | TASK [octavia : include_tasks] 
************************************************* 2026-03-10 01:21:57.560645 | orchestrator | Tuesday 10 March 2026 01:19:31 +0000 (0:00:00.819) 0:02:43.945 ********* 2026-03-10 01:21:57.560654 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-10 01:21:57.560663 | orchestrator | 2026-03-10 01:21:57.560672 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-10 01:21:57.560680 | orchestrator | Tuesday 10 March 2026 01:19:32 +0000 (0:00:00.555) 0:02:44.501 ********* 2026-03-10 01:21:57.560704 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:57.560720 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:57.560730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:57.560739 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:21:57.560815 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:21:57.560836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:21:57.560858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.560868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.560883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.560893 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.560902 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.560911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.560929 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:57.560939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': 
['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:57.560952 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:57.560962 | orchestrator | 2026-03-10 01:21:57.560971 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] *** 2026-03-10 01:21:57.560979 | orchestrator | Tuesday 10 March 2026 01:19:37 +0000 (0:00:05.654) 0:02:50.155 ********* 2026-03-10 01:21:57.560988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 01:21:57.561002 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 01:21:57.561017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 01:21:57.561040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 01:21:57.561062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:21:57.561076 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:57.561099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 01:21:57.561115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 
'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 01:21:57.561130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 01:21:57.561145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 01:21:57.561169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:21:57.561183 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:57.561202 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 01:21:57.561216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 01:21:57.561238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 01:21:57.561252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 01:21:57.561266 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:21:57.561288 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:57.561301 | orchestrator | 2026-03-10 01:21:57.561315 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] ***** 2026-03-10 01:21:57.561331 | orchestrator | Tuesday 10 March 2026 
01:19:38 +0000 (0:00:00.728) 0:02:50.883 ********* 2026-03-10 01:21:57.561353 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 01:21:57.561371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 01:21:57.561390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 01:21:57.561418 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 01:21:57.561438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:21:57.561449 | orchestrator | skipping: [testbed-node-0] 2026-03-10 01:21:57.561468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 
'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 01:21:57.561478 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 01:21:57.561494 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 01:21:57.561504 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 01:21:57.561521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:21:57.561532 | orchestrator | skipping: [testbed-node-1] 2026-03-10 01:21:57.561543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-10 01:21:57.561559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-10 01:21:57.561570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-10 01:21:57.561589 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-10 01:21:57.561600 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-10 01:21:57.561610 | orchestrator | skipping: [testbed-node-2] 2026-03-10 01:21:57.561620 | orchestrator | 2026-03-10 01:21:57.561630 | orchestrator | TASK [octavia : Copying over config.json files for services] ******************* 2026-03-10 01:21:57.561639 | orchestrator | Tuesday 10 March 2026 01:19:39 +0000 (0:00:00.931) 0:02:51.815 ********* 2026-03-10 01:21:57.561659 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:57.561675 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': 
{'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:57.561686 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:57.561701 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 
'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:21:57.561711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:21:57.561721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:21:57.561738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.561783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.561795 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.561805 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.561820 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.561830 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.561847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:57.561858 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:57.561873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:57.561884 | orchestrator | 2026-03-10 01:21:57.561894 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-10 01:21:57.561903 | orchestrator | Tuesday 10 March 2026 01:19:44 +0000 (0:00:05.193) 0:02:57.008 ********* 2026-03-10 01:21:57.561913 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-10 01:21:57.562011 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-10 01:21:57.562072 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-10 01:21:57.562088 | orchestrator | 2026-03-10 01:21:57.562104 | orchestrator | TASK [octavia : Copying over octavia.conf] 
************************************* 2026-03-10 01:21:57.562120 | orchestrator | Tuesday 10 March 2026 01:19:46 +0000 (0:00:02.236) 0:02:59.244 ********* 2026-03-10 01:21:57.562137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:57.562163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:57.562181 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:57.562211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:21:57.562243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:21:57.562262 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:21:57.562280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.562306 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 
01:21:57.562324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.562354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.562373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.562399 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.562417 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:57.562432 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:57.562455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 
'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:57.562473 | orchestrator | 2026-03-10 01:21:57.562489 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-10 01:21:57.562505 | orchestrator | Tuesday 10 March 2026 01:20:07 +0000 (0:00:20.439) 0:03:19.683 ********* 2026-03-10 01:21:57.562532 | orchestrator | changed: [testbed-node-0] 2026-03-10 01:21:57.562549 | orchestrator | changed: [testbed-node-1] 2026-03-10 01:21:57.562566 | orchestrator | changed: [testbed-node-2] 2026-03-10 01:21:57.562582 | orchestrator | 2026-03-10 01:21:57.562598 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-10 01:21:57.562616 | orchestrator | Tuesday 10 March 2026 01:20:08 +0000 (0:00:01.481) 0:03:21.165 ********* 2026-03-10 01:21:57.562634 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-10 01:21:57.562650 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-10 01:21:57.562666 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-10 01:21:57.562683 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-10 01:21:57.562699 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-10 01:21:57.562716 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-10 01:21:57.562733 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-10 01:21:57.562816 | orchestrator | changed: [testbed-node-1] => 
(item=server_ca.cert.pem) 2026-03-10 01:21:57.562829 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-10 01:21:57.562839 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-10 01:21:57.562849 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-10 01:21:57.562858 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-10 01:21:57.562868 | orchestrator | 2026-03-10 01:21:57.562877 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-10 01:21:57.562887 | orchestrator | Tuesday 10 March 2026 01:20:14 +0000 (0:00:05.423) 0:03:26.588 ********* 2026-03-10 01:21:57.562897 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-10 01:21:57.562906 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-10 01:21:57.562915 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-10 01:21:57.562925 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-10 01:21:57.562934 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-10 01:21:57.562944 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-10 01:21:57.562953 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-10 01:21:57.562963 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-10 01:21:57.562972 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-10 01:21:57.562982 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-10 01:21:57.562991 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-10 01:21:57.563001 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-10 01:21:57.563010 | orchestrator | 2026-03-10 01:21:57.563029 | orchestrator | TASK 
[octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-10 01:21:57.563039 | orchestrator | Tuesday 10 March 2026 01:20:21 +0000 (0:00:07.714) 0:03:34.302 ********* 2026-03-10 01:21:57.563049 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-10 01:21:57.563059 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-10 01:21:57.563068 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-10 01:21:57.563078 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-10 01:21:57.563087 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-10 01:21:57.563097 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-10 01:21:57.563106 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-10 01:21:57.563116 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-10 01:21:57.563126 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-10 01:21:57.563144 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-10 01:21:57.563154 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-10 01:21:57.563164 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-10 01:21:57.563173 | orchestrator | 2026-03-10 01:21:57.563183 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-10 01:21:57.563193 | orchestrator | Tuesday 10 March 2026 01:20:27 +0000 (0:00:05.365) 0:03:39.667 ********* 2026-03-10 01:21:57.563209 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:57.563218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:57.563227 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-10 01:21:57.563241 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:21:57.563250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:21:57.563263 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-10 01:21:57.563279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.563288 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.563296 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.563305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.563318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.563332 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-10 01:21:57.563343 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:57.563352 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-10 01:21:57.563360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 
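The `healthcheck` entries in the container definitions above all follow the same shape: probe a TCP port (`healthcheck_port octavia-worker 5672`) or an HTTP endpoint (`healthcheck_curl http://192.168.16.10:9876`) every `interval` seconds, with `retries`, `timeout`, and `start_period` controlling when the container is marked unhealthy. As a rough sketch of what the port probe amounts to (a hypothetical helper for illustration, not the actual kolla healthcheck script):

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Mirrors the essence of a 'healthcheck_port'-style probe: the check
    passes as long as something is accepting connections on the port.
    """
    try:
        # create_connection resolves the address and attempts the handshake;
        # any failure (refused, unreachable, timed out) raises OSError.
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

In the log above the worker checks port 5672 (RabbitMQ) and the health-manager/housekeeping checks port 3306 (MariaDB), i.e. the services are considered healthy while their backend connections are reachable; the exact semantics of kolla's `healthcheck_port` (which matches the port against the named process) are richer than this sketch.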
2026-03-10 01:21:57.563369 | orchestrator |
2026-03-10 01:21:57.563377 | orchestrator | TASK [octavia : include_tasks] *************************************************
2026-03-10 01:21:57.563385 | orchestrator | Tuesday 10 March 2026 01:20:31 +0000 (0:00:03.780) 0:03:43.448 *********
2026-03-10 01:21:57.563393 | orchestrator | skipping: [testbed-node-0]
2026-03-10 01:21:57.563401 | orchestrator | skipping: [testbed-node-1]
2026-03-10 01:21:57.563409 | orchestrator | skipping: [testbed-node-2]
2026-03-10 01:21:57.563416 | orchestrator |
2026-03-10 01:21:57.563424 | orchestrator | TASK [octavia : Creating Octavia database] *************************************
2026-03-10 01:21:57.563432 | orchestrator | Tuesday 10 March 2026 01:20:31 +0000 (0:00:00.313) 0:03:43.762 *********
2026-03-10 01:21:57.563440 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.563448 | orchestrator |
2026-03-10 01:21:57.563455 | orchestrator | TASK [octavia : Creating Octavia persistence database] *************************
2026-03-10 01:21:57.563464 | orchestrator | Tuesday 10 March 2026 01:20:33 +0000 (0:00:02.377) 0:03:46.139 *********
2026-03-10 01:21:57.563471 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.563479 | orchestrator |
2026-03-10 01:21:57.563487 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ********
2026-03-10 01:21:57.563495 | orchestrator | Tuesday 10 March 2026 01:20:36 +0000 (0:00:02.323) 0:03:48.462 *********
2026-03-10 01:21:57.563503 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.563511 | orchestrator |
2026-03-10 01:21:57.563519 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] ***
2026-03-10 01:21:57.563532 | orchestrator | Tuesday 10 March 2026 01:20:38 +0000 (0:00:02.525) 0:03:50.987 *********
2026-03-10 01:21:57.563541 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.563548 | orchestrator |
2026-03-10 01:21:57.563556 | orchestrator | TASK [octavia : Running Octavia bootstrap container] ***************************
2026-03-10 01:21:57.563564 | orchestrator | Tuesday 10 March 2026 01:20:42 +0000 (0:00:03.545) 0:03:54.533 *********
2026-03-10 01:21:57.563572 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.563580 | orchestrator |
2026-03-10 01:21:57.563588 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-10 01:21:57.563595 | orchestrator | Tuesday 10 March 2026 01:21:06 +0000 (0:00:24.635) 0:04:19.169 *********
2026-03-10 01:21:57.563603 | orchestrator |
2026-03-10 01:21:57.563615 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-10 01:21:57.563624 | orchestrator | Tuesday 10 March 2026 01:21:06 +0000 (0:00:00.073) 0:04:19.242 *********
2026-03-10 01:21:57.563632 | orchestrator |
2026-03-10 01:21:57.563639 | orchestrator | TASK [octavia : Flush handlers] ************************************************
2026-03-10 01:21:57.563649 | orchestrator | Tuesday 10 March 2026 01:21:07 +0000 (0:00:00.069) 0:04:19.312 *********
2026-03-10 01:21:57.563662 | orchestrator |
2026-03-10 01:21:57.563676 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] **********************
2026-03-10 01:21:57.563689 | orchestrator | Tuesday 10 March 2026 01:21:07 +0000 (0:00:00.083) 0:04:19.396 *********
2026-03-10 01:21:57.563702 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.563715 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:21:57.563728 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:21:57.563741 | orchestrator |
2026-03-10 01:21:57.563773 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] *************
2026-03-10 01:21:57.563781 | orchestrator | Tuesday 10 March 2026 01:21:18 +0000 (0:00:11.009) 0:04:30.406 *********
2026-03-10 01:21:57.563789 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.563797 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:21:57.563804 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:21:57.563812 | orchestrator |
2026-03-10 01:21:57.563820 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] ***********
2026-03-10 01:21:57.563828 | orchestrator | Tuesday 10 March 2026 01:21:29 +0000 (0:00:11.697) 0:04:42.104 *********
2026-03-10 01:21:57.563836 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.563844 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:21:57.563852 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:21:57.563859 | orchestrator |
2026-03-10 01:21:57.563867 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] *************
2026-03-10 01:21:57.563875 | orchestrator | Tuesday 10 March 2026 01:21:41 +0000 (0:00:11.264) 0:04:53.368 *********
2026-03-10 01:21:57.563888 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.563896 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:21:57.563903 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:21:57.563911 | orchestrator |
2026-03-10 01:21:57.563919 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] *******************
2026-03-10 01:21:57.563927 | orchestrator | Tuesday 10 March 2026 01:21:46 +0000 (0:00:05.347) 0:04:58.716 *********
2026-03-10 01:21:57.563935 | orchestrator | changed: [testbed-node-0]
2026-03-10 01:21:57.563943 | orchestrator | changed: [testbed-node-1]
2026-03-10 01:21:57.563951 | orchestrator | changed: [testbed-node-2]
2026-03-10 01:21:57.563959 | orchestrator |
2026-03-10 01:21:57.563967 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:21:57.563975 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-10 01:21:57.563984 | orchestrator |
testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-10 01:21:57.563992 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-10 01:21:57.564007 | orchestrator |
2026-03-10 01:21:57.564015 | orchestrator |
2026-03-10 01:21:57.564023 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:21:57.564031 | orchestrator | Tuesday 10 March 2026 01:21:57 +0000 (0:00:10.593) 0:05:09.309 *********
2026-03-10 01:21:57.564038 | orchestrator | ===============================================================================
2026-03-10 01:21:57.564046 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 24.64s
2026-03-10 01:21:57.564054 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 20.44s
2026-03-10 01:21:57.564062 | orchestrator | octavia : Adding octavia related roles --------------------------------- 17.04s
2026-03-10 01:21:57.564069 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.94s
2026-03-10 01:21:57.564077 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 11.70s
2026-03-10 01:21:57.564085 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 11.26s
2026-03-10 01:21:57.564093 | orchestrator | octavia : Restart octavia-api container -------------------------------- 11.01s
2026-03-10 01:21:57.564101 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.85s
2026-03-10 01:21:57.564109 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.59s
2026-03-10 01:21:57.564116 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 9.17s
2026-03-10 01:21:57.564124 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.87s
2026-03-10 01:21:57.564132 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 7.71s
2026-03-10 01:21:57.564140 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.34s
2026-03-10 01:21:57.564148 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.58s
2026-03-10 01:21:57.564155 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.22s
2026-03-10 01:21:57.564163 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 5.94s
2026-03-10 01:21:57.564171 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.93s
2026-03-10 01:21:57.564179 | orchestrator | service-cert-copy : octavia | Copying over extra CA certificates -------- 5.65s
2026-03-10 01:21:57.564187 | orchestrator | octavia : Copying certificate files for octavia-worker ------------------ 5.42s
2026-03-10 01:21:57.564194 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.37s
2026-03-10 01:21:57.564207 | orchestrator | 2026-03-10 01:21:57 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:00.598802 | orchestrator | 2026-03-10 01:22:00 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:03.642109 | orchestrator | 2026-03-10 01:22:03 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:06.685247 | orchestrator | 2026-03-10 01:22:06 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:09.730543 | orchestrator | 2026-03-10 01:22:09 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:12.771308 | orchestrator | 2026-03-10 01:22:12 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:15.817443 | orchestrator | 2026-03-10 01:22:15 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:18.859526 | orchestrator | 2026-03-10 01:22:18 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:21.898086 | orchestrator | 2026-03-10 01:22:21 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:24.946442 | orchestrator | 2026-03-10 01:22:24 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:27.994396 | orchestrator | 2026-03-10 01:22:27 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:31.037547 | orchestrator | 2026-03-10 01:22:31 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:34.071436 | orchestrator | 2026-03-10 01:22:34 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:37.107466 | orchestrator | 2026-03-10 01:22:37 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:40.145686 | orchestrator | 2026-03-10 01:22:40 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:43.193088 | orchestrator | 2026-03-10 01:22:43 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:46.236480 | orchestrator | 2026-03-10 01:22:46 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:49.277253 | orchestrator | 2026-03-10 01:22:49 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:52.317195 | orchestrator | 2026-03-10 01:22:52 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:55.361223 | orchestrator | 2026-03-10 01:22:55 | INFO  | Wait 1 second(s) until refresh of running tasks
2026-03-10 01:22:58.409012 | orchestrator |
2026-03-10 01:22:58.767984 | orchestrator |
2026-03-10 01:22:58.773480 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Tue Mar 10 01:22:58 UTC 2026
2026-03-10 01:22:58.773532 | orchestrator |
2026-03-10 01:22:59.221655 | orchestrator | ok: Runtime: 0:39:48.484945
2026-03-10 01:22:59.491891 |
2026-03-10 01:22:59.492035 | TASK [Bootstrap
services]
2026-03-10 01:23:00.243343 | orchestrator |
2026-03-10 01:23:00.243464 | orchestrator | # BOOTSTRAP
2026-03-10 01:23:00.243474 | orchestrator |
2026-03-10 01:23:00.243479 | orchestrator | + set -e
2026-03-10 01:23:00.243484 | orchestrator | + echo
2026-03-10 01:23:00.243490 | orchestrator | + echo '# BOOTSTRAP'
2026-03-10 01:23:00.243498 | orchestrator | + echo
2026-03-10 01:23:00.243519 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh
2026-03-10 01:23:00.248551 | orchestrator | + set -e
2026-03-10 01:23:00.248570 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh
2026-03-10 01:23:05.184162 | orchestrator | 2026-03-10 01:23:05 | INFO  | It takes a moment until task 5af6f328-fb2f-43aa-9633-3e87059962ec (flavor-manager) has been started and output is visible here.
2026-03-10 01:23:13.459950 | orchestrator | 2026-03-10 01:23:08 | INFO  | Flavor SCS-1L-1 created
2026-03-10 01:23:13.460082 | orchestrator | 2026-03-10 01:23:08 | INFO  | Flavor SCS-1L-1-5 created
2026-03-10 01:23:13.460101 | orchestrator | 2026-03-10 01:23:08 | INFO  | Flavor SCS-1V-2 created
2026-03-10 01:23:13.460112 | orchestrator | 2026-03-10 01:23:09 | INFO  | Flavor SCS-1V-2-5 created
2026-03-10 01:23:13.460123 | orchestrator | 2026-03-10 01:23:09 | INFO  | Flavor SCS-1V-4 created
2026-03-10 01:23:13.460133 | orchestrator | 2026-03-10 01:23:09 | INFO  | Flavor SCS-1V-4-10 created
2026-03-10 01:23:13.460143 | orchestrator | 2026-03-10 01:23:09 | INFO  | Flavor SCS-1V-8 created
2026-03-10 01:23:13.460154 | orchestrator | 2026-03-10 01:23:09 | INFO  | Flavor SCS-1V-8-20 created
2026-03-10 01:23:13.460182 | orchestrator | 2026-03-10 01:23:10 | INFO  | Flavor SCS-2V-4 created
2026-03-10 01:23:13.460192 | orchestrator | 2026-03-10 01:23:10 | INFO  | Flavor SCS-2V-4-10 created
2026-03-10 01:23:13.460202 | orchestrator | 2026-03-10 01:23:10 | INFO  | Flavor SCS-2V-8 created
2026-03-10 01:23:13.460212 | orchestrator | 2026-03-10 01:23:10 | INFO  | Flavor SCS-2V-8-20 created
2026-03-10 01:23:13.460222 | orchestrator | 2026-03-10 01:23:10 | INFO  | Flavor SCS-2V-16 created
2026-03-10 01:23:13.460231 | orchestrator | 2026-03-10 01:23:10 | INFO  | Flavor SCS-2V-16-50 created
2026-03-10 01:23:13.460241 | orchestrator | 2026-03-10 01:23:10 | INFO  | Flavor SCS-4V-8 created
2026-03-10 01:23:13.460251 | orchestrator | 2026-03-10 01:23:11 | INFO  | Flavor SCS-4V-8-20 created
2026-03-10 01:23:13.460260 | orchestrator | 2026-03-10 01:23:11 | INFO  | Flavor SCS-4V-16 created
2026-03-10 01:23:13.460270 | orchestrator | 2026-03-10 01:23:11 | INFO  | Flavor SCS-4V-16-50 created
2026-03-10 01:23:13.460280 | orchestrator | 2026-03-10 01:23:11 | INFO  | Flavor SCS-4V-32 created
2026-03-10 01:23:13.460289 | orchestrator | 2026-03-10 01:23:11 | INFO  | Flavor SCS-4V-32-100 created
2026-03-10 01:23:13.460299 | orchestrator | 2026-03-10 01:23:11 | INFO  | Flavor SCS-8V-16 created
2026-03-10 01:23:13.460309 | orchestrator | 2026-03-10 01:23:11 | INFO  | Flavor SCS-8V-16-50 created
2026-03-10 01:23:13.460319 | orchestrator | 2026-03-10 01:23:12 | INFO  | Flavor SCS-8V-32 created
2026-03-10 01:23:13.460329 | orchestrator | 2026-03-10 01:23:12 | INFO  | Flavor SCS-8V-32-100 created
2026-03-10 01:23:13.460338 | orchestrator | 2026-03-10 01:23:12 | INFO  | Flavor SCS-16V-32 created
2026-03-10 01:23:13.460348 | orchestrator | 2026-03-10 01:23:12 | INFO  | Flavor SCS-16V-32-100 created
2026-03-10 01:23:13.460358 | orchestrator | 2026-03-10 01:23:12 | INFO  | Flavor SCS-2V-4-20s created
2026-03-10 01:23:13.460367 | orchestrator | 2026-03-10 01:23:12 | INFO  | Flavor SCS-4V-8-50s created
2026-03-10 01:23:13.460377 | orchestrator | 2026-03-10 01:23:13 | INFO  | Flavor SCS-4V-16-100s created
2026-03-10 01:23:13.460387 | orchestrator | 2026-03-10 01:23:13 | INFO  | Flavor SCS-8V-32-100s created
2026-03-10 01:23:16.120255 | orchestrator | 2026-03-10 01:23:16 | INFO  | Trying to run play bootstrap-basic in environment openstack
2026-03-10
01:23:16.130864 | orchestrator | 2026-03-10 01:23:16 | INFO  | Prepare task for execution of bootstrap-basic.
2026-03-10 01:23:16.204296 | orchestrator | 2026-03-10 01:23:16 | INFO  | Task 5814fd2c-c716-4287-ac68-e0757412bffc (bootstrap-basic) was prepared for execution.
2026-03-10 01:23:16.204402 | orchestrator | 2026-03-10 01:23:16 | INFO  | It takes a moment until task 5814fd2c-c716-4287-ac68-e0757412bffc (bootstrap-basic) has been started and output is visible here.
2026-03-10 01:24:05.337831 | orchestrator |
2026-03-10 01:24:05.337957 | orchestrator | PLAY [Bootstrap basic OpenStack services] **************************************
2026-03-10 01:24:05.337972 | orchestrator |
2026-03-10 01:24:05.337981 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-10 01:24:05.337990 | orchestrator | Tuesday 10 March 2026 01:23:20 +0000 (0:00:00.070) 0:00:00.070 *********
2026-03-10 01:24:05.337999 | orchestrator | ok: [localhost]
2026-03-10 01:24:05.338008 | orchestrator |
2026-03-10 01:24:05.338072 | orchestrator | TASK [Get volume type LUKS] ****************************************************
2026-03-10 01:24:05.338081 | orchestrator | Tuesday 10 March 2026 01:23:22 +0000 (0:00:01.980) 0:00:02.050 *********
2026-03-10 01:24:05.338092 | orchestrator | ok: [localhost]
2026-03-10 01:24:05.338101 | orchestrator |
2026-03-10 01:24:05.338109 | orchestrator | TASK [Create volume type LUKS] *************************************************
2026-03-10 01:24:05.338123 | orchestrator | Tuesday 10 March 2026 01:23:32 +0000 (0:00:10.012) 0:00:12.062 *********
2026-03-10 01:24:05.338137 | orchestrator | changed: [localhost]
2026-03-10 01:24:05.338153 | orchestrator |
2026-03-10 01:24:05.338167 | orchestrator | TASK [Create public network] ***************************************************
2026-03-10 01:24:05.338181 | orchestrator | Tuesday 10 March 2026 01:23:40 +0000 (0:00:08.095) 0:00:20.158 *********
2026-03-10 01:24:05.338195 | orchestrator | changed: [localhost]
2026-03-10 01:24:05.338208 | orchestrator |
2026-03-10 01:24:05.338226 | orchestrator | TASK [Set public network to default] *******************************************
2026-03-10 01:24:05.338241 | orchestrator | Tuesday 10 March 2026 01:23:46 +0000 (0:00:05.343) 0:00:25.502 *********
2026-03-10 01:24:05.338255 | orchestrator | changed: [localhost]
2026-03-10 01:24:05.338269 | orchestrator |
2026-03-10 01:24:05.338282 | orchestrator | TASK [Create public subnet] ****************************************************
2026-03-10 01:24:05.338296 | orchestrator | Tuesday 10 March 2026 01:23:52 +0000 (0:00:06.674) 0:00:32.176 *********
2026-03-10 01:24:05.338310 | orchestrator | changed: [localhost]
2026-03-10 01:24:05.338325 | orchestrator |
2026-03-10 01:24:05.338339 | orchestrator | TASK [Create default IPv4 subnet pool] *****************************************
2026-03-10 01:24:05.338353 | orchestrator | Tuesday 10 March 2026 01:23:57 +0000 (0:00:04.285) 0:00:36.462 *********
2026-03-10 01:24:05.338366 | orchestrator | changed: [localhost]
2026-03-10 01:24:05.338380 | orchestrator |
2026-03-10 01:24:05.338394 | orchestrator | TASK [Create manager role] *****************************************************
2026-03-10 01:24:05.338427 | orchestrator | Tuesday 10 March 2026 01:24:01 +0000 (0:00:04.377) 0:00:40.839 *********
2026-03-10 01:24:05.338443 | orchestrator | ok: [localhost]
2026-03-10 01:24:05.338456 | orchestrator |
2026-03-10 01:24:05.338470 | orchestrator | PLAY RECAP *********************************************************************
2026-03-10 01:24:05.338483 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-10 01:24:05.338499 | orchestrator |
2026-03-10 01:24:05.338513 | orchestrator |
2026-03-10 01:24:05.338526 | orchestrator | TASKS RECAP ********************************************************************
2026-03-10 01:24:05.338541 | orchestrator | Tuesday 10 March 2026 01:24:05 +0000 (0:00:03.688) 0:00:44.528 *********
2026-03-10 01:24:05.338554 | orchestrator | ===============================================================================
2026-03-10 01:24:05.338568 | orchestrator | Get volume type LUKS --------------------------------------------------- 10.01s
2026-03-10 01:24:05.338610 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.10s
2026-03-10 01:24:05.338626 | orchestrator | Set public network to default ------------------------------------------- 6.67s
2026-03-10 01:24:05.338640 | orchestrator | Create public network --------------------------------------------------- 5.34s
2026-03-10 01:24:05.338655 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 4.38s
2026-03-10 01:24:05.338667 | orchestrator | Create public subnet ---------------------------------------------------- 4.29s
2026-03-10 01:24:05.338680 | orchestrator | Create manager role ----------------------------------------------------- 3.69s
2026-03-10 01:24:05.338693 | orchestrator | Gathering Facts --------------------------------------------------------- 1.98s
2026-03-10 01:24:08.102375 | orchestrator | 2026-03-10 01:24:08 | INFO  | It takes a moment until task f57dd30e-be36-4216-be53-ee2866c74478 (image-manager) has been started and output is visible here.
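The SCS-* flavor names reported by flavor-manager earlier in this log encode their resource dimensions directly: per the SCS flavor naming convention, `SCS-2V-4-10` means 2 vCPUs of class `V`, 4 GiB RAM, and a 10 GB root disk, with a trailing `s` (as in `SCS-8V-32-100s`) indicating local SSD storage. A minimal sketch of a decoder for that pattern follows; `parse_scs_flavor` is a hypothetical helper for illustration, not part of flavor-manager, and it covers only the basic name shapes seen above.

```python
import re

# Basic SCS name shape: SCS-<#vCPUs><cpu class>-<RAM GiB>[-<root disk GB>[s]]
# (hypothetical decoder; only handles the simple forms logged above)
SCS_NAME = re.compile(
    r"^SCS-(?P<cpus>\d+)(?P<cpu_class>[LVTC])"
    r"-(?P<ram>\d+)"
    r"(?:-(?P<disk>\d+)(?P<disk_type>s?))?$"
)

def parse_scs_flavor(name: str) -> dict:
    """Decode an SCS flavor name such as 'SCS-2V-4-10' into its dimensions."""
    m = SCS_NAME.match(name)
    if not m:
        raise ValueError(f"not a recognized SCS flavor name: {name}")
    return {
        "vcpus": int(m["cpus"]),
        "cpu_class": m["cpu_class"],   # e.g. V = vCPU, L = crowded/low-perf
        "ram_gib": int(m["ram"]),
        "disk_gb": int(m["disk"]) if m["disk"] else 0,  # 0 = no root disk
        "ssd": m["disk_type"] == "s",
    }
```

For example, `parse_scs_flavor("SCS-1L-1")` yields a diskless 1-vCPU, 1-GiB flavor, while `SCS-8V-32-100s` decodes with `ssd` set to `True`.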
2026-03-10 01:24:52.397108 | orchestrator | 2026-03-10 01:24:10 | INFO  | Processing image 'Cirros 0.6.2'
2026-03-10 01:24:52.397228 | orchestrator | 2026-03-10 01:24:11 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302
2026-03-10 01:24:52.397250 | orchestrator | 2026-03-10 01:24:11 | INFO  | Importing image Cirros 0.6.2
2026-03-10 01:24:52.397264 | orchestrator | 2026-03-10 01:24:11 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-10 01:24:52.397278 | orchestrator | 2026-03-10 01:24:13 | INFO  | Waiting for image to leave queued state...
2026-03-10 01:24:52.397293 | orchestrator | 2026-03-10 01:24:15 | INFO  | Waiting for import to complete...
2026-03-10 01:24:52.397306 | orchestrator | 2026-03-10 01:24:25 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images
2026-03-10 01:24:52.397320 | orchestrator | 2026-03-10 01:24:26 | INFO  | Checking parameters of 'Cirros 0.6.2'
2026-03-10 01:24:52.397333 | orchestrator | 2026-03-10 01:24:26 | INFO  | Setting internal_version = 0.6.2
2026-03-10 01:24:52.397346 | orchestrator | 2026-03-10 01:24:26 | INFO  | Setting image_original_user = cirros
2026-03-10 01:24:52.397358 | orchestrator | 2026-03-10 01:24:26 | INFO  | Adding tag os:cirros
2026-03-10 01:24:52.397372 | orchestrator | 2026-03-10 01:24:26 | INFO  | Setting property architecture: x86_64
2026-03-10 01:24:52.397384 | orchestrator | 2026-03-10 01:24:27 | INFO  | Setting property hw_disk_bus: scsi
2026-03-10 01:24:52.397395 | orchestrator | 2026-03-10 01:24:27 | INFO  | Setting property hw_rng_model: virtio
2026-03-10 01:24:52.397406 | orchestrator | 2026-03-10 01:24:27 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-10 01:24:52.397418 | orchestrator | 2026-03-10 01:24:28 | INFO  | Setting property hw_watchdog_action: reset
2026-03-10 01:24:52.397429 | orchestrator | 2026-03-10 01:24:28 | INFO  | Setting property hypervisor_type: qemu
2026-03-10 01:24:52.397454 | orchestrator | 2026-03-10 01:24:28 | INFO  | Setting property os_distro: cirros
2026-03-10 01:24:52.397465 | orchestrator | 2026-03-10 01:24:28 | INFO  | Setting property os_purpose: minimal
2026-03-10 01:24:52.397476 | orchestrator | 2026-03-10 01:24:29 | INFO  | Setting property replace_frequency: never
2026-03-10 01:24:52.397487 | orchestrator | 2026-03-10 01:24:29 | INFO  | Setting property uuid_validity: none
2026-03-10 01:24:52.397498 | orchestrator | 2026-03-10 01:24:29 | INFO  | Setting property provided_until: none
2026-03-10 01:24:52.397508 | orchestrator | 2026-03-10 01:24:29 | INFO  | Setting property image_description: Cirros
2026-03-10 01:24:52.397519 | orchestrator | 2026-03-10 01:24:30 | INFO  | Setting property image_name: Cirros
2026-03-10 01:24:52.397553 | orchestrator | 2026-03-10 01:24:30 | INFO  | Setting property internal_version: 0.6.2
2026-03-10 01:24:52.397564 | orchestrator | 2026-03-10 01:24:30 | INFO  | Setting property image_original_user: cirros
2026-03-10 01:24:52.397575 | orchestrator | 2026-03-10 01:24:30 | INFO  | Setting property os_version: 0.6.2
2026-03-10 01:24:52.397587 | orchestrator | 2026-03-10 01:24:31 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img
2026-03-10 01:24:52.397599 | orchestrator | 2026-03-10 01:24:31 | INFO  | Setting property image_build_date: 2023-05-30
2026-03-10 01:24:52.397610 | orchestrator | 2026-03-10 01:24:31 | INFO  | Checking status of 'Cirros 0.6.2'
2026-03-10 01:24:52.397621 | orchestrator | 2026-03-10 01:24:31 | INFO  | Checking visibility of 'Cirros 0.6.2'
2026-03-10 01:24:52.397636 | orchestrator | 2026-03-10 01:24:31 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public'
2026-03-10 01:24:52.397648 | orchestrator | 2026-03-10 01:24:31 | INFO  | Processing image 'Cirros 0.6.3'
2026-03-10 01:24:52.397659 | orchestrator | 2026-03-10 01:24:32 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302
2026-03-10 01:24:52.397669 | orchestrator | 2026-03-10 01:24:32 | INFO  | Importing image Cirros 0.6.3
2026-03-10 01:24:52.397680 | orchestrator | 2026-03-10 01:24:32 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-10 01:24:52.397691 | orchestrator | 2026-03-10 01:24:33 | INFO  | Waiting for image to leave queued state...
2026-03-10 01:24:52.397701 | orchestrator | 2026-03-10 01:24:35 | INFO  | Waiting for import to complete...
2026-03-10 01:24:52.397785 | orchestrator | 2026-03-10 01:24:45 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images
2026-03-10 01:24:52.397807 | orchestrator | 2026-03-10 01:24:46 | INFO  | Checking parameters of 'Cirros 0.6.3'
2026-03-10 01:24:52.397824 | orchestrator | 2026-03-10 01:24:46 | INFO  | Setting internal_version = 0.6.3
2026-03-10 01:24:52.397842 | orchestrator | 2026-03-10 01:24:46 | INFO  | Setting image_original_user = cirros
2026-03-10 01:24:52.397860 | orchestrator | 2026-03-10 01:24:46 | INFO  | Adding tag os:cirros
2026-03-10 01:24:52.397876 | orchestrator | 2026-03-10 01:24:46 | INFO  | Setting property architecture: x86_64
2026-03-10 01:24:52.397894 | orchestrator | 2026-03-10 01:24:46 | INFO  | Setting property hw_disk_bus: scsi
2026-03-10 01:24:52.397911 | orchestrator | 2026-03-10 01:24:47 | INFO  | Setting property hw_rng_model: virtio
2026-03-10 01:24:52.397929 | orchestrator | 2026-03-10 01:24:47 | INFO  | Setting property hw_scsi_model: virtio-scsi
2026-03-10 01:24:52.397947 | orchestrator | 2026-03-10 01:24:47 | INFO  | Setting property hw_watchdog_action: reset
2026-03-10 01:24:52.397965 | orchestrator | 2026-03-10 01:24:47 | INFO  | Setting property hypervisor_type: qemu
2026-03-10 01:24:52.397983 | orchestrator | 2026-03-10 01:24:48 | INFO  | Setting property os_distro: cirros
2026-03-10 01:24:52.397999 | orchestrator | 2026-03-10 01:24:48 | INFO  | Setting property os_purpose: minimal
2026-03-10 01:24:52.398079 | orchestrator | 2026-03-10 01:24:48 | INFO  | Setting property replace_frequency: never
2026-03-10 01:24:52.398105 | orchestrator | 2026-03-10 01:24:48 | INFO  | Setting property uuid_validity: none
2026-03-10 01:24:52.398125 | orchestrator | 2026-03-10 01:24:49 | INFO  | Setting property provided_until: none
2026-03-10 01:24:52.398145 | orchestrator | 2026-03-10 01:24:49 | INFO  | Setting property image_description: Cirros
2026-03-10 01:24:52.398181 | orchestrator | 2026-03-10 01:24:49 | INFO  | Setting property image_name: Cirros
2026-03-10 01:24:52.398201 | orchestrator | 2026-03-10 01:24:50 | INFO  | Setting property internal_version: 0.6.3
2026-03-10 01:24:52.398220 | orchestrator | 2026-03-10 01:24:50 | INFO  | Setting property image_original_user: cirros
2026-03-10 01:24:52.398232 | orchestrator | 2026-03-10 01:24:50 | INFO  | Setting property os_version: 0.6.3
2026-03-10 01:24:52.398243 | orchestrator | 2026-03-10 01:24:50 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img
2026-03-10 01:24:52.398255 | orchestrator | 2026-03-10 01:24:51 | INFO  | Setting property image_build_date: 2024-09-26
2026-03-10 01:24:52.398265 | orchestrator | 2026-03-10 01:24:51 | INFO  | Checking status of 'Cirros 0.6.3'
2026-03-10 01:24:52.398276 | orchestrator | 2026-03-10 01:24:51 | INFO  | Checking visibility of 'Cirros 0.6.3'
2026-03-10 01:24:52.398287 | orchestrator | 2026-03-10 01:24:51 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public'
2026-03-10 01:24:52.738370 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh
2026-03-10 01:24:55.356544 | orchestrator | 2026-03-10 01:24:55 | INFO  | date: 2026-03-09
2026-03-10 01:24:55.356676 | orchestrator | 2026-03-10 01:24:55 | INFO  | image:
octavia-amphora-haproxy-2024.2.20260309.qcow2
2026-03-10 01:24:55.356703 | orchestrator | 2026-03-10 01:24:55 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260309.qcow2
2026-03-10 01:24:55.356712 | orchestrator | 2026-03-10 01:24:55 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260309.qcow2.CHECKSUM
2026-03-10 01:24:55.514795 | orchestrator | 2026-03-10 01:24:55 | INFO  | checksum: localhost | ok: "/var/lib/zuul/builds/4a5ad4ba3fd64d48834bebaf1663dbc6/work/logs"
2026-03-10 01:25:31.221345 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/4a5ad4ba3fd64d48834bebaf1663dbc6/work/artifacts"
2026-03-10 01:25:31.482968 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/4a5ad4ba3fd64d48834bebaf1663dbc6/work/docs"
2026-03-10 01:25:31.500710 |
2026-03-10 01:25:31.501180 | LOOP [fetch-output : Collect logs, artifacts and docs]
2026-03-10 01:25:32.400301 | orchestrator | changed: .d..t...... ./
2026-03-10 01:25:32.400688 | orchestrator | changed: All items complete
2026-03-10 01:25:32.400747 |
2026-03-10 01:25:33.111098 | orchestrator | changed: .d..t...... ./
2026-03-10 01:25:33.839639 | orchestrator | changed: .d..t...... ./
2026-03-10 01:25:33.867318 |
2026-03-10 01:25:33.867482 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2026-03-10 01:25:33.901342 | orchestrator | skipping: Conditional result was False
2026-03-10 01:25:33.912619 | orchestrator | skipping: Conditional result was False
2026-03-10 01:25:33.930815 |
2026-03-10 01:25:33.930992 | PLAY RECAP
2026-03-10 01:25:33.931068 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2026-03-10 01:25:33.931106 |
2026-03-10 01:25:34.060073 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2026-03-10 01:25:34.061097 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-10 01:25:34.834608 |
2026-03-10 01:25:34.834763 | PLAY [Base post]
2026-03-10 01:25:34.851631 |
2026-03-10 01:25:34.851776 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2026-03-10 01:25:35.931767 | orchestrator | changed
2026-03-10 01:25:35.939465 |
2026-03-10 01:25:35.939601 | PLAY RECAP
2026-03-10 01:25:35.939663 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2026-03-10 01:25:35.939728 |
2026-03-10 01:25:36.078067 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2026-03-10 01:25:36.082331 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2026-03-10 01:25:36.971401 |
2026-03-10 01:25:36.971647 | PLAY [Base post-logs]
2026-03-10 01:25:36.982656 |
2026-03-10 01:25:36.982798 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2026-03-10 01:25:37.456454 | localhost | changed
2026-03-10 01:25:37.470954 |
2026-03-10 01:25:37.471130 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2026-03-10 01:25:37.497993 | localhost | ok
2026-03-10 01:25:37.502662 |
2026-03-10 01:25:37.502810 | TASK [Set zuul-log-path fact]
2026-03-10 01:25:37.520013 | localhost | ok
2026-03-10 01:25:37.531382 |
2026-03-10 01:25:37.531516 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-10 01:25:37.557615 | localhost | ok
2026-03-10 01:25:37.561711 |
2026-03-10 01:25:37.561832 | TASK [upload-logs : Create log directories]
2026-03-10 01:25:38.075772 | localhost | changed
2026-03-10 01:25:38.078659 |
2026-03-10 01:25:38.078778 | TASK [upload-logs : Ensure logs are readable before uploading]
2026-03-10 01:25:38.623187 | localhost -> localhost | ok: Runtime: 0:00:00.007956
2026-03-10 01:25:38.627295 |
2026-03-10 01:25:38.627407 | TASK [upload-logs : Upload logs to log server]
2026-03-10 01:25:39.221562 | localhost | Output suppressed because no_log was given
2026-03-10 01:25:39.223471 |
2026-03-10 01:25:39.223579 | LOOP [upload-logs : Compress console log and json output]
2026-03-10 01:25:39.285640 | localhost | skipping: Conditional result was False
2026-03-10 01:25:39.291851 | localhost | skipping: Conditional result was False
2026-03-10 01:25:39.305601 |
2026-03-10 01:25:39.305797 | LOOP [upload-logs : Upload compressed console log and json output]
2026-03-10 01:25:39.365936 | localhost | skipping: Conditional result was False
2026-03-10 01:25:39.366537 |
2026-03-10 01:25:39.370224 | localhost | skipping: Conditional result was False
2026-03-10 01:25:39.377743 |
2026-03-10 01:25:39.377977 | LOOP [upload-logs : Upload console log and json output]
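The amphora-image bootstrap step above downloads both the qcow2 image and a companion `.CHECKSUM` file before uploading the image to Glance. The verification idea can be sketched as below; this is an illustrative helper, not the actual `301-openstack-octavia-amhpora-image.sh` logic, and it assumes a coreutils `sha256sum`-style listing (`<hex digest>  <filename>` per line) — the real CHECKSUM layout may differ.

```python
import hashlib
from pathlib import Path

def verify_sha256(image_path: str, checksum_text: str) -> bool:
    """Check a local file against a sha256sum-style CHECKSUM listing.

    checksum_text is the downloaded .CHECKSUM content; lines are assumed
    to look like '<hex digest>  <filename>' (coreutils format).
    """
    digests = {}
    for line in checksum_text.splitlines():
        parts = line.split()
        if len(parts) == 2:
            # sha256sum may prefix binary-mode names with '*'
            digests[parts[1].lstrip("*")] = parts[0].lower()

    # Hash the local file in 1 MiB chunks to avoid loading a large qcow2
    # image into memory at once.
    h = hashlib.sha256()
    with open(image_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)

    return digests.get(Path(image_path).name) == h.hexdigest()
```

A mismatch (or a filename missing from the listing) simply returns `False`, letting the caller abort the upload before a corrupted image reaches Glance.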