2026-03-11 00:00:07.513355 | Job console starting
2026-03-11 00:00:07.532777 | Updating git repos
2026-03-11 00:00:07.719263 | Cloning repos into workspace
2026-03-11 00:00:08.050664 | Restoring repo states
2026-03-11 00:00:08.076304 | Merging changes
2026-03-11 00:00:08.076326 | Checking out repos
2026-03-11 00:00:08.649105 | Preparing playbooks
2026-03-11 00:00:09.680341 | Running Ansible setup
2026-03-11 00:00:17.615406 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2026-03-11 00:00:19.783236 |
2026-03-11 00:00:19.784427 | PLAY [Base pre]
2026-03-11 00:00:19.808924 |
2026-03-11 00:00:19.809082 | TASK [Setup log path fact]
2026-03-11 00:00:19.846493 | orchestrator | ok
2026-03-11 00:00:19.878179 |
2026-03-11 00:00:19.878318 | TASK [set-zuul-log-path-fact : Set log path for a build]
2026-03-11 00:00:19.995035 | orchestrator | ok
2026-03-11 00:00:20.037209 |
2026-03-11 00:00:20.037333 | TASK [emit-job-header : Print job information]
2026-03-11 00:00:20.128224 | # Job Information
2026-03-11 00:00:20.128393 | Ansible Version: 2.16.14
2026-03-11 00:00:20.128430 | Job: testbed-deploy-current-in-a-nutshell-with-tempest-ubuntu-24.04
2026-03-11 00:00:20.128466 | Pipeline: periodic-midnight
2026-03-11 00:00:20.128490 | Executor: 521e9411259a
2026-03-11 00:00:20.128525 | Triggered by: https://github.com/osism/testbed
2026-03-11 00:00:20.128552 | Event ID: 8eb8c26edea145abae90a2d897835dd2
2026-03-11 00:00:20.152581 |
2026-03-11 00:00:20.152716 | LOOP [emit-job-header : Print node information]
2026-03-11 00:00:20.599023 | orchestrator | ok:
2026-03-11 00:00:20.599372 | orchestrator | # Node Information
2026-03-11 00:00:20.599428 | orchestrator | Inventory Hostname: orchestrator
2026-03-11 00:00:20.599455 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2026-03-11 00:00:20.599478 | orchestrator | Username: zuul-testbed03
2026-03-11 00:00:20.599498 | orchestrator | Distro: Debian 12.13
2026-03-11 00:00:20.599533 | orchestrator | Provider: static-testbed
2026-03-11 00:00:20.599554 | orchestrator | Region:
2026-03-11 00:00:20.599584 | orchestrator | Label: testbed-orchestrator
2026-03-11 00:00:20.600001 | orchestrator | Product Name: OpenStack Nova
2026-03-11 00:00:20.600061 | orchestrator | Interface IP: 81.163.193.140
2026-03-11 00:00:20.633017 |
2026-03-11 00:00:20.637987 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2026-03-11 00:00:21.911316 | orchestrator -> localhost | changed
2026-03-11 00:00:21.923523 |
2026-03-11 00:00:21.923634 | TASK [log-inventory : Copy ansible inventory to logs dir]
2026-03-11 00:00:24.265084 | orchestrator -> localhost | changed
2026-03-11 00:00:24.295108 |
2026-03-11 00:00:24.295214 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2026-03-11 00:00:24.855998 | orchestrator -> localhost | ok
2026-03-11 00:00:24.861916 |
2026-03-11 00:00:24.862010 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2026-03-11 00:00:24.899516 | orchestrator | ok
2026-03-11 00:00:24.925930 | orchestrator | included: /var/lib/zuul/builds/60b46b9ceeea47e7bd8f6c4f3c34d8fb/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2026-03-11 00:00:24.961585 |
2026-03-11 00:00:24.961692 | TASK [add-build-sshkey : Create Temp SSH key]
2026-03-11 00:00:26.975867 | orchestrator -> localhost | Generating public/private rsa key pair.
2026-03-11 00:00:26.976028 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/60b46b9ceeea47e7bd8f6c4f3c34d8fb/work/60b46b9ceeea47e7bd8f6c4f3c34d8fb_id_rsa
2026-03-11 00:00:26.976059 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/60b46b9ceeea47e7bd8f6c4f3c34d8fb/work/60b46b9ceeea47e7bd8f6c4f3c34d8fb_id_rsa.pub
2026-03-11 00:00:26.976081 | orchestrator -> localhost | The key fingerprint is:
2026-03-11 00:00:26.976101 | orchestrator -> localhost | SHA256:KFqJ+G9DKZo0jQY0D9VIEdX5mvME7ddxkLXL/LJtMkQ zuul-build-sshkey
2026-03-11 00:00:26.976120 | orchestrator -> localhost | The key's randomart image is:
2026-03-11 00:00:26.976146 | orchestrator -> localhost | +---[RSA 3072]----+
2026-03-11 00:00:26.976164 | orchestrator -> localhost | | o=*.. . o. |
2026-03-11 00:00:26.976183 | orchestrator -> localhost | | + . . o o .|
2026-03-11 00:00:26.976199 | orchestrator -> localhost | |. + o .. |
2026-03-11 00:00:26.976215 | orchestrator -> localhost | |.. o . o o .E..|
2026-03-11 00:00:26.976232 | orchestrator -> localhost | |o + +.. S ..o+ |
2026-03-11 00:00:26.976253 | orchestrator -> localhost | | *.+o. + o . .. .|
2026-03-11 00:00:26.976270 | orchestrator -> localhost | |oo+o + . .. .|
2026-03-11 00:00:26.976286 | orchestrator -> localhost | |o .o . o+.|
2026-03-11 00:00:26.976303 | orchestrator -> localhost | | ... .+.|
2026-03-11 00:00:26.976320 | orchestrator -> localhost | +----[SHA256]-----+
2026-03-11 00:00:26.976366 | orchestrator -> localhost | ok: Runtime: 0:00:01.021781
2026-03-11 00:00:26.982169 |
2026-03-11 00:00:26.982247 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2026-03-11 00:00:27.050356 | orchestrator | ok
2026-03-11 00:00:27.062268 | orchestrator | included: /var/lib/zuul/builds/60b46b9ceeea47e7bd8f6c4f3c34d8fb/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2026-03-11 00:00:27.103648 |
2026-03-11 00:00:27.103777 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2026-03-11 00:00:27.149270 | orchestrator | skipping: Conditional result was False
2026-03-11 00:00:27.160973 |
2026-03-11 00:00:27.161062 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2026-03-11 00:00:28.274470 | orchestrator | changed
2026-03-11 00:00:28.291455 |
2026-03-11 00:00:28.291558 | TASK [add-build-sshkey : Make sure user has a .ssh]
2026-03-11 00:00:28.667803 | orchestrator | ok
2026-03-11 00:00:28.677051 |
2026-03-11 00:00:28.677177 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2026-03-11 00:00:29.243299 | orchestrator | ok
2026-03-11 00:00:29.252112 |
2026-03-11 00:00:29.252214 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2026-03-11 00:00:29.789060 | orchestrator | ok
2026-03-11 00:00:29.794186 |
2026-03-11 00:00:29.794268 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2026-03-11 00:00:29.868345 | orchestrator | skipping: Conditional result was False
2026-03-11 00:00:29.875491 |
2026-03-11 00:00:29.875576 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2026-03-11 00:00:31.625252 | orchestrator -> localhost | changed
2026-03-11 00:00:31.646375 |
2026-03-11 00:00:31.646473 | TASK [add-build-sshkey : Add back temp key]
2026-03-11 00:00:32.575219 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/60b46b9ceeea47e7bd8f6c4f3c34d8fb/work/60b46b9ceeea47e7bd8f6c4f3c34d8fb_id_rsa (zuul-build-sshkey)
2026-03-11 00:00:32.575398 | orchestrator -> localhost | ok: Runtime: 0:00:00.052591
2026-03-11 00:00:32.581176 |
2026-03-11 00:00:32.581254 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2026-03-11 00:00:33.403069 | orchestrator | ok
2026-03-11 00:00:33.418291 |
2026-03-11 00:00:33.418403 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2026-03-11 00:00:33.455311 | orchestrator | skipping: Conditional result was False
2026-03-11 00:00:33.527282 |
2026-03-11 00:00:33.527391 | TASK [start-zuul-console : Start zuul_console daemon.]
2026-03-11 00:00:34.034112 | orchestrator | ok
2026-03-11 00:00:34.047604 |
2026-03-11 00:00:34.047711 | TASK [validate-host : Define zuul_info_dir fact]
2026-03-11 00:00:34.080637 | orchestrator | ok
2026-03-11 00:00:34.086514 |
2026-03-11 00:00:34.086596 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2026-03-11 00:00:35.219854 | orchestrator -> localhost | ok
2026-03-11 00:00:35.226763 |
2026-03-11 00:00:35.226874 | TASK [validate-host : Collect information about the host]
2026-03-11 00:00:36.904590 | orchestrator | ok
2026-03-11 00:00:36.930177 |
2026-03-11 00:00:36.930292 | TASK [validate-host : Sanitize hostname]
2026-03-11 00:00:37.039427 | orchestrator | ok
2026-03-11 00:00:37.049219 |
2026-03-11 00:00:37.049329 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2026-03-11 00:00:38.801998 | orchestrator -> localhost | changed
2026-03-11 00:00:38.807113 |
2026-03-11 00:00:38.807206 | TASK [validate-host : Collect information about zuul worker]
2026-03-11 00:00:39.439222 | orchestrator | ok
2026-03-11 00:00:39.444653 |
2026-03-11 00:00:39.444780 | TASK [validate-host : Write out all zuul information for each host]
2026-03-11 00:00:40.360124 | orchestrator -> localhost | changed
2026-03-11 00:00:40.373576 |
2026-03-11 00:00:40.373682 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2026-03-11 00:00:40.704829 | orchestrator | ok
2026-03-11 00:00:40.717388 |
2026-03-11 00:00:40.717501 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2026-03-11 00:02:00.499186 | orchestrator | changed:
2026-03-11 00:02:00.499428 | orchestrator | .d..t...... src/
2026-03-11 00:02:00.499462 | orchestrator | .d..t...... src/github.com/
2026-03-11 00:02:00.499487 | orchestrator | .d..t...... src/github.com/osism/
2026-03-11 00:02:00.499508 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2026-03-11 00:02:00.499529 | orchestrator | RedHat.yml
2026-03-11 00:02:00.516267 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2026-03-11 00:02:00.516284 | orchestrator | RedHat.yml
2026-03-11 00:02:00.516335 | orchestrator | = 1.53.0"...
2026-03-11 00:02:14.241884 | orchestrator | - Finding hashicorp/local versions matching ">= 2.2.0"...
2026-03-11 00:02:14.733581 | orchestrator | - Installing hashicorp/null v3.2.4...
2026-03-11 00:02:15.579283 | orchestrator | - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2026-03-11 00:02:15.672962 | orchestrator | - Installing terraform-provider-openstack/openstack v3.4.0...
2026-03-11 00:02:16.334127 | orchestrator | - Installed terraform-provider-openstack/openstack v3.4.0 (signed, key ID 4F80527A391BEFD2)
2026-03-11 00:02:16.721259 | orchestrator | - Installing hashicorp/local v2.7.0...
2026-03-11 00:02:17.334416 | orchestrator | - Installed hashicorp/local v2.7.0 (signed, key ID 0C0AF313E5FD9F80)
2026-03-11 00:02:17.334526 | orchestrator |
2026-03-11 00:02:17.334544 | orchestrator | Providers are signed by their developers.
2026-03-11 00:02:17.334559 | orchestrator | If you'd like to know more about provider signing, you can read about it here:
2026-03-11 00:02:17.334571 | orchestrator | https://opentofu.org/docs/cli/plugins/signing/
2026-03-11 00:02:17.334608 | orchestrator |
2026-03-11 00:02:17.334622 | orchestrator | OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2026-03-11 00:02:17.334669 | orchestrator | selections it made above. Include this file in your version control repository
2026-03-11 00:02:17.334720 | orchestrator | so that OpenTofu can guarantee to make the same selections by default when
2026-03-11 00:02:17.334735 | orchestrator | you run "tofu init" in the future.
2026-03-11 00:02:17.334874 | orchestrator |
2026-03-11 00:02:17.334890 | orchestrator | OpenTofu has been successfully initialized!
2026-03-11 00:02:17.334902 | orchestrator |
2026-03-11 00:02:17.334913 | orchestrator | You may now begin working with OpenTofu. Try running "tofu plan" to see
2026-03-11 00:02:17.334925 | orchestrator | any changes that are required for your infrastructure. All OpenTofu commands
2026-03-11 00:02:17.334936 | orchestrator | should now work.
2026-03-11 00:02:17.334954 | orchestrator |
2026-03-11 00:02:17.334965 | orchestrator | If you ever set or change modules or backend configuration for OpenTofu,
2026-03-11 00:02:17.334977 | orchestrator | rerun this command to reinitialize your working directory. If you forget, other
2026-03-11 00:02:17.334989 | orchestrator | commands will detect it and remind you to do so if necessary.
2026-03-11 00:02:17.493440 | orchestrator | Created and switched to workspace "ci"!
2026-03-11 00:02:17.493529 | orchestrator |
2026-03-11 00:02:17.493544 | orchestrator | You're now on a new, empty workspace. Workspaces isolate their state,
2026-03-11 00:02:17.493557 | orchestrator | so if you run "tofu plan" OpenTofu will not see any existing state
2026-03-11 00:02:17.493569 | orchestrator | for this configuration.
2026-03-11 00:02:17.609881 | orchestrator | ci.auto.tfvars
2026-03-11 00:02:17.612583 | orchestrator | default_custom.tf
2026-03-11 00:02:18.531990 | orchestrator | data.openstack_networking_network_v2.public: Reading...
2026-03-11 00:02:19.148297 | orchestrator | data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2026-03-11 00:02:19.802191 | orchestrator |
2026-03-11 00:02:19.803664 | orchestrator | OpenTofu used the selected providers to generate the following execution
2026-03-11 00:02:19.803681 | orchestrator | plan. Resource actions are indicated with the following symbols:
2026-03-11 00:02:19.803687 | orchestrator | + create
2026-03-11 00:02:19.803691 | orchestrator | <= read (data resources)
2026-03-11 00:02:19.803696 | orchestrator |
2026-03-11 00:02:19.803701 | orchestrator | OpenTofu will perform the following actions:
2026-03-11 00:02:19.803705 | orchestrator |
2026-03-11 00:02:19.803709 | orchestrator | # data.openstack_images_image_v2.image will be read during apply
2026-03-11 00:02:19.803714 | orchestrator | # (config refers to values not yet known)
2026-03-11 00:02:19.803718 | orchestrator | <= data "openstack_images_image_v2" "image" {
2026-03-11 00:02:19.803722 | orchestrator | + checksum = (known after apply)
2026-03-11 00:02:19.803726 | orchestrator | + created_at = (known after apply)
2026-03-11 00:02:19.803730 | orchestrator | + file = (known after apply)
2026-03-11 00:02:19.803734 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.803751 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:19.803755 | orchestrator | + min_disk_gb = (known after apply)
2026-03-11 00:02:19.803759 | orchestrator | + min_ram_mb = (known after apply)
2026-03-11 00:02:19.803763 | orchestrator | + most_recent = true
2026-03-11 00:02:19.803767 | orchestrator | + name = (known after apply)
2026-03-11 00:02:19.803771 | orchestrator | + protected = (known after apply)
2026-03-11 00:02:19.803775 | orchestrator | + region = (known after apply)
2026-03-11 00:02:19.803781 | orchestrator | + schema = (known after apply)
2026-03-11 00:02:19.803785 | orchestrator | + size_bytes = (known after apply)
2026-03-11 00:02:19.803789 | orchestrator | + tags = (known after apply)
2026-03-11 00:02:19.803792 | orchestrator | + updated_at = (known after apply)
2026-03-11 00:02:19.803796 | orchestrator | }
2026-03-11 00:02:19.803800 | orchestrator |
2026-03-11 00:02:19.803804 | orchestrator | # data.openstack_images_image_v2.image_node will be read during apply
2026-03-11 00:02:19.803808 | orchestrator | # (config refers to values not yet known)
2026-03-11 00:02:19.803812 | orchestrator | <= data "openstack_images_image_v2" "image_node" {
2026-03-11 00:02:19.803816 | orchestrator | + checksum = (known after apply)
2026-03-11 00:02:19.803820 | orchestrator | + created_at = (known after apply)
2026-03-11 00:02:19.803824 | orchestrator | + file = (known after apply)
2026-03-11 00:02:19.803828 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.803832 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:19.803836 | orchestrator | + min_disk_gb = (known after apply)
2026-03-11 00:02:19.803839 | orchestrator | + min_ram_mb = (known after apply)
2026-03-11 00:02:19.803843 | orchestrator | + most_recent = true
2026-03-11 00:02:19.803847 | orchestrator | + name = (known after apply)
2026-03-11 00:02:19.803851 | orchestrator | + protected = (known after apply)
2026-03-11 00:02:19.803854 | orchestrator | + region = (known after apply)
2026-03-11 00:02:19.803858 | orchestrator | + schema = (known after apply)
2026-03-11 00:02:19.803862 | orchestrator | + size_bytes = (known after apply)
2026-03-11 00:02:19.803865 | orchestrator | + tags = (known after apply)
2026-03-11 00:02:19.803869 | orchestrator | + updated_at = (known after apply)
2026-03-11 00:02:19.803873 | orchestrator | }
2026-03-11 00:02:19.803877 | orchestrator |
2026-03-11 00:02:19.803880 | orchestrator | # local_file.MANAGER_ADDRESS will be created
2026-03-11 00:02:19.803884 | orchestrator | + resource "local_file" "MANAGER_ADDRESS" {
2026-03-11 00:02:19.803888 | orchestrator | + content = (known after apply)
2026-03-11 00:02:19.803892 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-11 00:02:19.803896 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-11 00:02:19.803900 | orchestrator | + content_md5 = (known after apply)
2026-03-11 00:02:19.803903 | orchestrator | + content_sha1 = (known after apply)
2026-03-11 00:02:19.803907 | orchestrator | + content_sha256 = (known after apply)
2026-03-11 00:02:19.803911 | orchestrator | + content_sha512 = (known after apply)
2026-03-11 00:02:19.803915 | orchestrator | + directory_permission = "0777"
2026-03-11 00:02:19.803919 | orchestrator | + file_permission = "0644"
2026-03-11 00:02:19.803922 | orchestrator | + filename = ".MANAGER_ADDRESS.ci"
2026-03-11 00:02:19.803926 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.803930 | orchestrator | }
2026-03-11 00:02:19.803934 | orchestrator |
2026-03-11 00:02:19.803937 | orchestrator | # local_file.id_rsa_pub will be created
2026-03-11 00:02:19.803941 | orchestrator | + resource "local_file" "id_rsa_pub" {
2026-03-11 00:02:19.803945 | orchestrator | + content = (known after apply)
2026-03-11 00:02:19.803949 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-11 00:02:19.803953 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-11 00:02:19.803956 | orchestrator | + content_md5 = (known after apply)
2026-03-11 00:02:19.803960 | orchestrator | + content_sha1 = (known after apply)
2026-03-11 00:02:19.803964 | orchestrator | + content_sha256 = (known after apply)
2026-03-11 00:02:19.803968 | orchestrator | + content_sha512 = (known after apply)
2026-03-11 00:02:19.803971 | orchestrator | + directory_permission = "0777"
2026-03-11 00:02:19.803975 | orchestrator | + file_permission = "0644"
2026-03-11 00:02:19.803983 | orchestrator | + filename = ".id_rsa.ci.pub"
2026-03-11 00:02:19.803986 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.803990 | orchestrator | }
2026-03-11 00:02:19.803994 | orchestrator |
2026-03-11 00:02:19.804003 | orchestrator | # local_file.inventory will be created
2026-03-11 00:02:19.804007 | orchestrator | + resource "local_file" "inventory" {
2026-03-11 00:02:19.804011 | orchestrator | + content = (known after apply)
2026-03-11 00:02:19.804014 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-11 00:02:19.804018 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-11 00:02:19.804022 | orchestrator | + content_md5 = (known after apply)
2026-03-11 00:02:19.804026 | orchestrator | + content_sha1 = (known after apply)
2026-03-11 00:02:19.804030 | orchestrator | + content_sha256 = (known after apply)
2026-03-11 00:02:19.804033 | orchestrator | + content_sha512 = (known after apply)
2026-03-11 00:02:19.804037 | orchestrator | + directory_permission = "0777"
2026-03-11 00:02:19.804041 | orchestrator | + file_permission = "0644"
2026-03-11 00:02:19.804045 | orchestrator | + filename = "inventory.ci"
2026-03-11 00:02:19.804048 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.804052 | orchestrator | }
2026-03-11 00:02:19.804056 | orchestrator |
2026-03-11 00:02:19.804060 | orchestrator | # local_sensitive_file.id_rsa will be created
2026-03-11 00:02:19.804064 | orchestrator | + resource "local_sensitive_file" "id_rsa" {
2026-03-11 00:02:19.804067 | orchestrator | + content = (sensitive value)
2026-03-11 00:02:19.804071 | orchestrator | + content_base64sha256 = (known after apply)
2026-03-11 00:02:19.804075 | orchestrator | + content_base64sha512 = (known after apply)
2026-03-11 00:02:19.804079 | orchestrator | + content_md5 = (known after apply)
2026-03-11 00:02:19.804083 | orchestrator | + content_sha1 = (known after apply)
2026-03-11 00:02:19.804086 | orchestrator | + content_sha256 = (known after apply)
2026-03-11 00:02:19.804099 | orchestrator | + content_sha512 = (known after apply)
2026-03-11 00:02:19.804103 | orchestrator | + directory_permission = "0700"
2026-03-11 00:02:19.804107 | orchestrator | + file_permission = "0600"
2026-03-11 00:02:19.804111 | orchestrator | + filename = ".id_rsa.ci"
2026-03-11 00:02:19.804115 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.804118 | orchestrator | }
2026-03-11 00:02:19.804122 | orchestrator |
2026-03-11 00:02:19.804126 | orchestrator | # null_resource.node_semaphore will be created
2026-03-11 00:02:19.804129 | orchestrator | + resource "null_resource" "node_semaphore" {
2026-03-11 00:02:19.804133 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.804137 | orchestrator | }
2026-03-11 00:02:19.804141 | orchestrator |
2026-03-11 00:02:19.804145 | orchestrator | # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2026-03-11 00:02:19.804149 | orchestrator | + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2026-03-11 00:02:19.804152 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:19.804156 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:19.804160 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.804163 | orchestrator | + image_id = (known after apply)
2026-03-11 00:02:19.804167 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:19.804171 | orchestrator | + name = "testbed-volume-manager-base"
2026-03-11 00:02:19.804175 | orchestrator | + region = (known after apply)
2026-03-11 00:02:19.804179 | orchestrator | + size = 80
2026-03-11 00:02:19.804182 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:19.804186 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:19.804190 | orchestrator | }
2026-03-11 00:02:19.804194 | orchestrator |
2026-03-11 00:02:19.804197 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2026-03-11 00:02:19.804201 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-11 00:02:19.804205 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:19.804209 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:19.804213 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.804220 | orchestrator | + image_id = (known after apply)
2026-03-11 00:02:19.804223 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:19.804227 | orchestrator | + name = "testbed-volume-0-node-base"
2026-03-11 00:02:19.804231 | orchestrator | + region = (known after apply)
2026-03-11 00:02:19.804235 | orchestrator | + size = 80
2026-03-11 00:02:19.804239 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:19.804242 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:19.804246 | orchestrator | }
2026-03-11 00:02:19.804250 | orchestrator |
2026-03-11 00:02:19.804254 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2026-03-11 00:02:19.804257 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-11 00:02:19.804261 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:19.804265 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:19.804268 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.804272 | orchestrator | + image_id = (known after apply)
2026-03-11 00:02:19.804276 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:19.804280 | orchestrator | + name = "testbed-volume-1-node-base"
2026-03-11 00:02:19.804284 | orchestrator | + region = (known after apply)
2026-03-11 00:02:19.804287 | orchestrator | + size = 80
2026-03-11 00:02:19.804291 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:19.804295 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:19.804299 | orchestrator | }
2026-03-11 00:02:19.804302 | orchestrator |
2026-03-11 00:02:19.804306 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2026-03-11 00:02:19.804310 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-11 00:02:19.804313 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:19.804317 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:19.804321 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.804325 | orchestrator | + image_id = (known after apply)
2026-03-11 00:02:19.804328 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:19.804332 | orchestrator | + name = "testbed-volume-2-node-base"
2026-03-11 00:02:19.804336 | orchestrator | + region = (known after apply)
2026-03-11 00:02:19.804339 | orchestrator | + size = 80
2026-03-11 00:02:19.804343 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:19.804347 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:19.804351 | orchestrator | }
2026-03-11 00:02:19.804355 | orchestrator |
2026-03-11 00:02:19.804358 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2026-03-11 00:02:19.804362 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-11 00:02:19.804366 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:19.804370 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:19.804374 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.804377 | orchestrator | + image_id = (known after apply)
2026-03-11 00:02:19.804381 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:19.804387 | orchestrator | + name = "testbed-volume-3-node-base"
2026-03-11 00:02:19.804391 | orchestrator | + region = (known after apply)
2026-03-11 00:02:19.804395 | orchestrator | + size = 80
2026-03-11 00:02:19.804404 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:19.804408 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:19.804411 | orchestrator | }
2026-03-11 00:02:19.804415 | orchestrator |
2026-03-11 00:02:19.804419 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2026-03-11 00:02:19.804423 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-11 00:02:19.804426 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:19.804430 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:19.804434 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.804441 | orchestrator | + image_id = (known after apply)
2026-03-11 00:02:19.804445 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:19.804448 | orchestrator | + name = "testbed-volume-4-node-base"
2026-03-11 00:02:19.804452 | orchestrator | + region = (known after apply)
2026-03-11 00:02:19.804456 | orchestrator | + size = 80
2026-03-11 00:02:19.804460 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:19.804463 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:19.804467 | orchestrator | }
2026-03-11 00:02:19.804471 | orchestrator |
2026-03-11 00:02:19.804475 | orchestrator | # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2026-03-11 00:02:19.804481 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2026-03-11 00:02:19.804485 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:19.804489 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:19.804493 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.804496 | orchestrator | + image_id = (known after apply)
2026-03-11 00:02:19.804500 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:19.804504 | orchestrator | + name = "testbed-volume-5-node-base"
2026-03-11 00:02:19.804508 | orchestrator | + region = (known after apply)
2026-03-11 00:02:19.804512 | orchestrator | + size = 80
2026-03-11 00:02:19.804515 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:19.804519 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:19.804523 | orchestrator | }
2026-03-11 00:02:19.804527 | orchestrator |
2026-03-11 00:02:19.804530 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[0] will be created
2026-03-11 00:02:19.804534 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-11 00:02:19.804538 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:19.804542 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:19.804545 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.804549 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:19.804553 | orchestrator | + name = "testbed-volume-0-node-3"
2026-03-11 00:02:19.804557 | orchestrator | + region = (known after apply)
2026-03-11 00:02:19.804561 | orchestrator | + size = 20
2026-03-11 00:02:19.804565 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:19.804568 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:19.804572 | orchestrator | }
2026-03-11 00:02:19.804576 | orchestrator |
2026-03-11 00:02:19.804580 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[1] will be created
2026-03-11 00:02:19.804583 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-11 00:02:19.804587 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:19.804591 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:19.804595 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.804598 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:19.804602 | orchestrator | + name = "testbed-volume-1-node-4"
2026-03-11 00:02:19.804606 | orchestrator | + region = (known after apply)
2026-03-11 00:02:19.804610 | orchestrator | + size = 20
2026-03-11 00:02:19.804613 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:19.804617 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:19.804621 | orchestrator | }
2026-03-11 00:02:19.804642 | orchestrator |
2026-03-11 00:02:19.804647 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[2] will be created
2026-03-11 00:02:19.804650 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-11 00:02:19.804654 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:19.804658 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:19.804662 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.804666 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:19.804669 | orchestrator | + name = "testbed-volume-2-node-5"
2026-03-11 00:02:19.804673 | orchestrator | + region = (known after apply)
2026-03-11 00:02:19.804680 | orchestrator | + size = 20
2026-03-11 00:02:19.804684 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:19.804687 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:19.804691 | orchestrator | }
2026-03-11 00:02:19.804695 | orchestrator |
2026-03-11 00:02:19.804699 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[3] will be created
2026-03-11 00:02:19.804703 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-11 00:02:19.804706 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:19.804710 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:19.804714 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.804718 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:19.804722 | orchestrator | + name = "testbed-volume-3-node-3"
2026-03-11 00:02:19.804725 | orchestrator | + region = (known after apply)
2026-03-11 00:02:19.804729 | orchestrator | + size = 20
2026-03-11 00:02:19.804733 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:19.804737 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:19.804741 | orchestrator | }
2026-03-11 00:02:19.804744 | orchestrator |
2026-03-11 00:02:19.804748 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[4] will be created
2026-03-11 00:02:19.804752 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-11 00:02:19.804756 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:19.804759 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:19.804763 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.804767 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:19.804771 | orchestrator | + name = "testbed-volume-4-node-4"
2026-03-11 00:02:19.804775 | orchestrator | + region = (known after apply)
2026-03-11 00:02:19.804781 | orchestrator | + size = 20
2026-03-11 00:02:19.804785 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:19.804788 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:19.804792 | orchestrator | }
2026-03-11 00:02:19.804796 | orchestrator |
2026-03-11 00:02:19.804800 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[5] will be created
2026-03-11 00:02:19.804804 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-11 00:02:19.804807 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:19.804811 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:19.804815 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.804819 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:19.804822 | orchestrator | + name = "testbed-volume-5-node-5"
2026-03-11 00:02:19.804826 | orchestrator | + region = (known after apply)
2026-03-11 00:02:19.804830 | orchestrator | + size = 20
2026-03-11 00:02:19.804834 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:19.804838 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:19.804841 | orchestrator | }
2026-03-11 00:02:19.804845 | orchestrator |
2026-03-11 00:02:19.804849 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[6] will be created
2026-03-11 00:02:19.804853 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-11 00:02:19.804857 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:19.804860 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:19.804864 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.804871 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:19.804874 | orchestrator | + name = "testbed-volume-6-node-3"
2026-03-11 00:02:19.804878 | orchestrator | + region = (known after apply)
2026-03-11 00:02:19.804882 | orchestrator | + size = 20
2026-03-11 00:02:19.804886 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:19.804889 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:19.804893 | orchestrator | }
2026-03-11 00:02:19.804897 | orchestrator |
2026-03-11 00:02:19.804901 | orchestrator | # openstack_blockstorage_volume_v3.node_volume[7] will be created
2026-03-11 00:02:19.804904 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" {
2026-03-11 00:02:19.804913 | orchestrator | + attachment = (known after apply)
2026-03-11 00:02:19.804916 | orchestrator | + availability_zone = "nova"
2026-03-11 00:02:19.804920 | orchestrator | + id = (known after apply)
2026-03-11 00:02:19.804924 | orchestrator | + metadata = (known after apply)
2026-03-11 00:02:19.804927 | orchestrator | + name = "testbed-volume-7-node-4"
2026-03-11 00:02:19.804931 | orchestrator | + region = (known after apply)
2026-03-11 00:02:19.804935 | orchestrator | + size = 20
2026-03-11 00:02:19.804939 | orchestrator | + volume_retype_policy = "never"
2026-03-11 00:02:19.804942 | orchestrator | + volume_type = "ssd"
2026-03-11 00:02:19.804946 | orchestrator | }
2026-03-11 00:02:19.804950 | orchestrator |
2026-03-11 00:02:19.804954 | orchestrator | #
openstack_blockstorage_volume_v3.node_volume[8] will be created 2026-03-11 00:02:19.804958 | orchestrator | + resource "openstack_blockstorage_volume_v3" "node_volume" { 2026-03-11 00:02:19.804961 | orchestrator | + attachment = (known after apply) 2026-03-11 00:02:19.804965 | orchestrator | + availability_zone = "nova" 2026-03-11 00:02:19.804969 | orchestrator | + id = (known after apply) 2026-03-11 00:02:19.804972 | orchestrator | + metadata = (known after apply) 2026-03-11 00:02:19.804976 | orchestrator | + name = "testbed-volume-8-node-5" 2026-03-11 00:02:19.804980 | orchestrator | + region = (known after apply) 2026-03-11 00:02:19.804984 | orchestrator | + size = 20 2026-03-11 00:02:19.804987 | orchestrator | + volume_retype_policy = "never" 2026-03-11 00:02:19.804991 | orchestrator | + volume_type = "ssd" 2026-03-11 00:02:19.804995 | orchestrator | } 2026-03-11 00:02:19.804999 | orchestrator | 2026-03-11 00:02:19.805002 | orchestrator | # openstack_compute_instance_v2.manager_server will be created 2026-03-11 00:02:19.805006 | orchestrator | + resource "openstack_compute_instance_v2" "manager_server" { 2026-03-11 00:02:19.805010 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-11 00:02:19.805014 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-11 00:02:19.805017 | orchestrator | + all_metadata = (known after apply) 2026-03-11 00:02:19.805021 | orchestrator | + all_tags = (known after apply) 2026-03-11 00:02:19.805025 | orchestrator | + availability_zone = "nova" 2026-03-11 00:02:19.805029 | orchestrator | + config_drive = true 2026-03-11 00:02:19.805032 | orchestrator | + created = (known after apply) 2026-03-11 00:02:19.805036 | orchestrator | + flavor_id = (known after apply) 2026-03-11 00:02:19.805040 | orchestrator | + flavor_name = "OSISM-4V-16" 2026-03-11 00:02:19.805043 | orchestrator | + force_delete = false 2026-03-11 00:02:19.805047 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-11 00:02:19.805051 | 
orchestrator | + id = (known after apply) 2026-03-11 00:02:19.805055 | orchestrator | + image_id = (known after apply) 2026-03-11 00:02:19.805058 | orchestrator | + image_name = (known after apply) 2026-03-11 00:02:19.805062 | orchestrator | + key_pair = "testbed" 2026-03-11 00:02:19.805066 | orchestrator | + name = "testbed-manager" 2026-03-11 00:02:19.805070 | orchestrator | + power_state = "active" 2026-03-11 00:02:19.805073 | orchestrator | + region = (known after apply) 2026-03-11 00:02:19.805077 | orchestrator | + security_groups = (known after apply) 2026-03-11 00:02:19.805081 | orchestrator | + stop_before_destroy = false 2026-03-11 00:02:19.805084 | orchestrator | + updated = (known after apply) 2026-03-11 00:02:19.805088 | orchestrator | + user_data = (sensitive value) 2026-03-11 00:02:19.805092 | orchestrator | 2026-03-11 00:02:19.805096 | orchestrator | + block_device { 2026-03-11 00:02:19.805100 | orchestrator | + boot_index = 0 2026-03-11 00:02:19.805104 | orchestrator | + delete_on_termination = false 2026-03-11 00:02:19.805110 | orchestrator | + destination_type = "volume" 2026-03-11 00:02:19.805114 | orchestrator | + multiattach = false 2026-03-11 00:02:19.805117 | orchestrator | + source_type = "volume" 2026-03-11 00:02:19.805121 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:19.805128 | orchestrator | } 2026-03-11 00:02:19.805132 | orchestrator | 2026-03-11 00:02:19.805136 | orchestrator | + network { 2026-03-11 00:02:19.805140 | orchestrator | + access_network = false 2026-03-11 00:02:19.805143 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-11 00:02:19.805147 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-11 00:02:19.805151 | orchestrator | + mac = (known after apply) 2026-03-11 00:02:19.805154 | orchestrator | + name = (known after apply) 2026-03-11 00:02:19.805158 | orchestrator | + port = (known after apply) 2026-03-11 00:02:19.805162 | orchestrator | + uuid = (known after apply) 2026-03-11 
00:02:19.805166 | orchestrator | } 2026-03-11 00:02:19.805169 | orchestrator | } 2026-03-11 00:02:19.805173 | orchestrator | 2026-03-11 00:02:19.805177 | orchestrator | # openstack_compute_instance_v2.node_server[0] will be created 2026-03-11 00:02:19.805181 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-11 00:02:19.805184 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-11 00:02:19.805188 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-11 00:02:19.805192 | orchestrator | + all_metadata = (known after apply) 2026-03-11 00:02:19.805196 | orchestrator | + all_tags = (known after apply) 2026-03-11 00:02:19.805199 | orchestrator | + availability_zone = "nova" 2026-03-11 00:02:19.805203 | orchestrator | + config_drive = true 2026-03-11 00:02:19.805207 | orchestrator | + created = (known after apply) 2026-03-11 00:02:19.805210 | orchestrator | + flavor_id = (known after apply) 2026-03-11 00:02:19.805214 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-11 00:02:19.805218 | orchestrator | + force_delete = false 2026-03-11 00:02:19.805221 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-11 00:02:19.805225 | orchestrator | + id = (known after apply) 2026-03-11 00:02:19.805229 | orchestrator | + image_id = (known after apply) 2026-03-11 00:02:19.805233 | orchestrator | + image_name = (known after apply) 2026-03-11 00:02:19.805237 | orchestrator | + key_pair = "testbed" 2026-03-11 00:02:19.805240 | orchestrator | + name = "testbed-node-0" 2026-03-11 00:02:19.805244 | orchestrator | + power_state = "active" 2026-03-11 00:02:19.805250 | orchestrator | + region = (known after apply) 2026-03-11 00:02:19.805254 | orchestrator | + security_groups = (known after apply) 2026-03-11 00:02:19.805258 | orchestrator | + stop_before_destroy = false 2026-03-11 00:02:19.805262 | orchestrator | + updated = (known after apply) 2026-03-11 00:02:19.805265 | orchestrator | + user_data = 
"ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-11 00:02:19.805269 | orchestrator | 2026-03-11 00:02:19.805273 | orchestrator | + block_device { 2026-03-11 00:02:19.805277 | orchestrator | + boot_index = 0 2026-03-11 00:02:19.805281 | orchestrator | + delete_on_termination = false 2026-03-11 00:02:19.805284 | orchestrator | + destination_type = "volume" 2026-03-11 00:02:19.805288 | orchestrator | + multiattach = false 2026-03-11 00:02:19.805292 | orchestrator | + source_type = "volume" 2026-03-11 00:02:19.805296 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:19.805299 | orchestrator | } 2026-03-11 00:02:19.805303 | orchestrator | 2026-03-11 00:02:19.805307 | orchestrator | + network { 2026-03-11 00:02:19.805311 | orchestrator | + access_network = false 2026-03-11 00:02:19.805314 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-11 00:02:19.805318 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-11 00:02:19.805322 | orchestrator | + mac = (known after apply) 2026-03-11 00:02:19.805326 | orchestrator | + name = (known after apply) 2026-03-11 00:02:19.805329 | orchestrator | + port = (known after apply) 2026-03-11 00:02:19.805333 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:19.805337 | orchestrator | } 2026-03-11 00:02:19.805340 | orchestrator | } 2026-03-11 00:02:19.805344 | orchestrator | 2026-03-11 00:02:19.805348 | orchestrator | # openstack_compute_instance_v2.node_server[1] will be created 2026-03-11 00:02:19.805352 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-11 00:02:19.805356 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-11 00:02:19.805362 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-11 00:02:19.805366 | orchestrator | + all_metadata = (known after apply) 2026-03-11 00:02:19.805370 | orchestrator | + all_tags = (known after apply) 2026-03-11 00:02:19.805374 | orchestrator | + availability_zone = "nova" 2026-03-11 00:02:19.805377 
| orchestrator | + config_drive = true 2026-03-11 00:02:19.805381 | orchestrator | + created = (known after apply) 2026-03-11 00:02:19.805385 | orchestrator | + flavor_id = (known after apply) 2026-03-11 00:02:19.805388 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-11 00:02:19.805392 | orchestrator | + force_delete = false 2026-03-11 00:02:19.805396 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-11 00:02:19.805400 | orchestrator | + id = (known after apply) 2026-03-11 00:02:19.805403 | orchestrator | + image_id = (known after apply) 2026-03-11 00:02:19.805407 | orchestrator | + image_name = (known after apply) 2026-03-11 00:02:19.805411 | orchestrator | + key_pair = "testbed" 2026-03-11 00:02:19.805415 | orchestrator | + name = "testbed-node-1" 2026-03-11 00:02:19.805418 | orchestrator | + power_state = "active" 2026-03-11 00:02:19.805422 | orchestrator | + region = (known after apply) 2026-03-11 00:02:19.805426 | orchestrator | + security_groups = (known after apply) 2026-03-11 00:02:19.805429 | orchestrator | + stop_before_destroy = false 2026-03-11 00:02:19.805433 | orchestrator | + updated = (known after apply) 2026-03-11 00:02:19.805437 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-11 00:02:19.805441 | orchestrator | 2026-03-11 00:02:19.805444 | orchestrator | + block_device { 2026-03-11 00:02:19.805448 | orchestrator | + boot_index = 0 2026-03-11 00:02:19.805452 | orchestrator | + delete_on_termination = false 2026-03-11 00:02:19.805456 | orchestrator | + destination_type = "volume" 2026-03-11 00:02:19.805459 | orchestrator | + multiattach = false 2026-03-11 00:02:19.805463 | orchestrator | + source_type = "volume" 2026-03-11 00:02:19.805467 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:19.805471 | orchestrator | } 2026-03-11 00:02:19.805474 | orchestrator | 2026-03-11 00:02:19.805478 | orchestrator | + network { 2026-03-11 00:02:19.805482 | orchestrator | + access_network = 
false 2026-03-11 00:02:19.805486 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-11 00:02:19.805489 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-11 00:02:19.805493 | orchestrator | + mac = (known after apply) 2026-03-11 00:02:19.805497 | orchestrator | + name = (known after apply) 2026-03-11 00:02:19.805500 | orchestrator | + port = (known after apply) 2026-03-11 00:02:19.805504 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:19.805508 | orchestrator | } 2026-03-11 00:02:19.805512 | orchestrator | } 2026-03-11 00:02:19.805515 | orchestrator | 2026-03-11 00:02:19.805519 | orchestrator | # openstack_compute_instance_v2.node_server[2] will be created 2026-03-11 00:02:19.805523 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-11 00:02:19.805527 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-11 00:02:19.805530 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-11 00:02:19.805534 | orchestrator | + all_metadata = (known after apply) 2026-03-11 00:02:19.805538 | orchestrator | + all_tags = (known after apply) 2026-03-11 00:02:19.805547 | orchestrator | + availability_zone = "nova" 2026-03-11 00:02:19.805550 | orchestrator | + config_drive = true 2026-03-11 00:02:19.805554 | orchestrator | + created = (known after apply) 2026-03-11 00:02:19.805558 | orchestrator | + flavor_id = (known after apply) 2026-03-11 00:02:19.805562 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-11 00:02:19.805565 | orchestrator | + force_delete = false 2026-03-11 00:02:19.805569 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-11 00:02:19.805573 | orchestrator | + id = (known after apply) 2026-03-11 00:02:19.805577 | orchestrator | + image_id = (known after apply) 2026-03-11 00:02:19.805584 | orchestrator | + image_name = (known after apply) 2026-03-11 00:02:19.805587 | orchestrator | + key_pair = "testbed" 2026-03-11 00:02:19.805591 | orchestrator | + name = 
"testbed-node-2" 2026-03-11 00:02:19.805595 | orchestrator | + power_state = "active" 2026-03-11 00:02:19.805598 | orchestrator | + region = (known after apply) 2026-03-11 00:02:19.805602 | orchestrator | + security_groups = (known after apply) 2026-03-11 00:02:19.805606 | orchestrator | + stop_before_destroy = false 2026-03-11 00:02:19.805610 | orchestrator | + updated = (known after apply) 2026-03-11 00:02:19.805613 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-11 00:02:19.805617 | orchestrator | 2026-03-11 00:02:19.805621 | orchestrator | + block_device { 2026-03-11 00:02:19.805638 | orchestrator | + boot_index = 0 2026-03-11 00:02:19.805642 | orchestrator | + delete_on_termination = false 2026-03-11 00:02:19.805646 | orchestrator | + destination_type = "volume" 2026-03-11 00:02:19.805652 | orchestrator | + multiattach = false 2026-03-11 00:02:19.805656 | orchestrator | + source_type = "volume" 2026-03-11 00:02:19.805660 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:19.805663 | orchestrator | } 2026-03-11 00:02:19.805667 | orchestrator | 2026-03-11 00:02:19.805671 | orchestrator | + network { 2026-03-11 00:02:19.805675 | orchestrator | + access_network = false 2026-03-11 00:02:19.805679 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-11 00:02:19.805682 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-11 00:02:19.805686 | orchestrator | + mac = (known after apply) 2026-03-11 00:02:19.805690 | orchestrator | + name = (known after apply) 2026-03-11 00:02:19.805694 | orchestrator | + port = (known after apply) 2026-03-11 00:02:19.805697 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:19.805701 | orchestrator | } 2026-03-11 00:02:19.805705 | orchestrator | } 2026-03-11 00:02:19.805709 | orchestrator | 2026-03-11 00:02:19.805713 | orchestrator | # openstack_compute_instance_v2.node_server[3] will be created 2026-03-11 00:02:19.805716 | orchestrator | + resource 
"openstack_compute_instance_v2" "node_server" { 2026-03-11 00:02:19.805720 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-11 00:02:19.805724 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-11 00:02:19.805728 | orchestrator | + all_metadata = (known after apply) 2026-03-11 00:02:19.805732 | orchestrator | + all_tags = (known after apply) 2026-03-11 00:02:19.805735 | orchestrator | + availability_zone = "nova" 2026-03-11 00:02:19.805739 | orchestrator | + config_drive = true 2026-03-11 00:02:19.805743 | orchestrator | + created = (known after apply) 2026-03-11 00:02:19.805747 | orchestrator | + flavor_id = (known after apply) 2026-03-11 00:02:19.805750 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-11 00:02:19.805754 | orchestrator | + force_delete = false 2026-03-11 00:02:19.805758 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-11 00:02:19.805762 | orchestrator | + id = (known after apply) 2026-03-11 00:02:19.805766 | orchestrator | + image_id = (known after apply) 2026-03-11 00:02:19.805769 | orchestrator | + image_name = (known after apply) 2026-03-11 00:02:19.805773 | orchestrator | + key_pair = "testbed" 2026-03-11 00:02:19.805777 | orchestrator | + name = "testbed-node-3" 2026-03-11 00:02:19.805781 | orchestrator | + power_state = "active" 2026-03-11 00:02:19.805784 | orchestrator | + region = (known after apply) 2026-03-11 00:02:19.805788 | orchestrator | + security_groups = (known after apply) 2026-03-11 00:02:19.805792 | orchestrator | + stop_before_destroy = false 2026-03-11 00:02:19.805796 | orchestrator | + updated = (known after apply) 2026-03-11 00:02:19.805799 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-11 00:02:19.805803 | orchestrator | 2026-03-11 00:02:19.805807 | orchestrator | + block_device { 2026-03-11 00:02:19.805814 | orchestrator | + boot_index = 0 2026-03-11 00:02:19.805818 | orchestrator | + delete_on_termination = false 2026-03-11 
00:02:19.805821 | orchestrator | + destination_type = "volume" 2026-03-11 00:02:19.805828 | orchestrator | + multiattach = false 2026-03-11 00:02:19.805832 | orchestrator | + source_type = "volume" 2026-03-11 00:02:19.805836 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:19.805839 | orchestrator | } 2026-03-11 00:02:19.805843 | orchestrator | 2026-03-11 00:02:19.805847 | orchestrator | + network { 2026-03-11 00:02:19.805851 | orchestrator | + access_network = false 2026-03-11 00:02:19.805854 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-11 00:02:19.805858 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-11 00:02:19.805862 | orchestrator | + mac = (known after apply) 2026-03-11 00:02:19.805866 | orchestrator | + name = (known after apply) 2026-03-11 00:02:19.805870 | orchestrator | + port = (known after apply) 2026-03-11 00:02:19.805873 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:19.805877 | orchestrator | } 2026-03-11 00:02:19.805881 | orchestrator | } 2026-03-11 00:02:19.805885 | orchestrator | 2026-03-11 00:02:19.805889 | orchestrator | # openstack_compute_instance_v2.node_server[4] will be created 2026-03-11 00:02:19.805892 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-11 00:02:19.805896 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-11 00:02:19.805900 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-11 00:02:19.805904 | orchestrator | + all_metadata = (known after apply) 2026-03-11 00:02:19.805908 | orchestrator | + all_tags = (known after apply) 2026-03-11 00:02:19.805911 | orchestrator | + availability_zone = "nova" 2026-03-11 00:02:19.805915 | orchestrator | + config_drive = true 2026-03-11 00:02:19.805919 | orchestrator | + created = (known after apply) 2026-03-11 00:02:19.805922 | orchestrator | + flavor_id = (known after apply) 2026-03-11 00:02:19.805926 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-11 00:02:19.805930 | 
orchestrator | + force_delete = false 2026-03-11 00:02:19.805934 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-11 00:02:19.805938 | orchestrator | + id = (known after apply) 2026-03-11 00:02:19.805941 | orchestrator | + image_id = (known after apply) 2026-03-11 00:02:19.805945 | orchestrator | + image_name = (known after apply) 2026-03-11 00:02:19.805949 | orchestrator | + key_pair = "testbed" 2026-03-11 00:02:19.805953 | orchestrator | + name = "testbed-node-4" 2026-03-11 00:02:19.805956 | orchestrator | + power_state = "active" 2026-03-11 00:02:19.805960 | orchestrator | + region = (known after apply) 2026-03-11 00:02:19.807868 | orchestrator | + security_groups = (known after apply) 2026-03-11 00:02:19.807874 | orchestrator | + stop_before_destroy = false 2026-03-11 00:02:19.807878 | orchestrator | + updated = (known after apply) 2026-03-11 00:02:19.807881 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-11 00:02:19.807885 | orchestrator | 2026-03-11 00:02:19.807889 | orchestrator | + block_device { 2026-03-11 00:02:19.807893 | orchestrator | + boot_index = 0 2026-03-11 00:02:19.807897 | orchestrator | + delete_on_termination = false 2026-03-11 00:02:19.807901 | orchestrator | + destination_type = "volume" 2026-03-11 00:02:19.807904 | orchestrator | + multiattach = false 2026-03-11 00:02:19.807908 | orchestrator | + source_type = "volume" 2026-03-11 00:02:19.807912 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:19.807916 | orchestrator | } 2026-03-11 00:02:19.807919 | orchestrator | 2026-03-11 00:02:19.807923 | orchestrator | + network { 2026-03-11 00:02:19.807927 | orchestrator | + access_network = false 2026-03-11 00:02:19.807931 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-11 00:02:19.807935 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-11 00:02:19.807938 | orchestrator | + mac = (known after apply) 2026-03-11 00:02:19.807942 | orchestrator | + name = (known 
after apply) 2026-03-11 00:02:19.807946 | orchestrator | + port = (known after apply) 2026-03-11 00:02:19.807954 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:19.807958 | orchestrator | } 2026-03-11 00:02:19.807962 | orchestrator | } 2026-03-11 00:02:19.807974 | orchestrator | 2026-03-11 00:02:19.807978 | orchestrator | # openstack_compute_instance_v2.node_server[5] will be created 2026-03-11 00:02:19.807982 | orchestrator | + resource "openstack_compute_instance_v2" "node_server" { 2026-03-11 00:02:19.807986 | orchestrator | + access_ip_v4 = (known after apply) 2026-03-11 00:02:19.807990 | orchestrator | + access_ip_v6 = (known after apply) 2026-03-11 00:02:19.807994 | orchestrator | + all_metadata = (known after apply) 2026-03-11 00:02:19.807998 | orchestrator | + all_tags = (known after apply) 2026-03-11 00:02:19.808002 | orchestrator | + availability_zone = "nova" 2026-03-11 00:02:19.808005 | orchestrator | + config_drive = true 2026-03-11 00:02:19.808009 | orchestrator | + created = (known after apply) 2026-03-11 00:02:19.808013 | orchestrator | + flavor_id = (known after apply) 2026-03-11 00:02:19.808017 | orchestrator | + flavor_name = "OSISM-8V-32" 2026-03-11 00:02:19.808020 | orchestrator | + force_delete = false 2026-03-11 00:02:19.808027 | orchestrator | + hypervisor_hostname = (known after apply) 2026-03-11 00:02:19.808031 | orchestrator | + id = (known after apply) 2026-03-11 00:02:19.808035 | orchestrator | + image_id = (known after apply) 2026-03-11 00:02:19.808039 | orchestrator | + image_name = (known after apply) 2026-03-11 00:02:19.808042 | orchestrator | + key_pair = "testbed" 2026-03-11 00:02:19.808046 | orchestrator | + name = "testbed-node-5" 2026-03-11 00:02:19.808050 | orchestrator | + power_state = "active" 2026-03-11 00:02:19.808054 | orchestrator | + region = (known after apply) 2026-03-11 00:02:19.808058 | orchestrator | + security_groups = (known after apply) 2026-03-11 00:02:19.808061 | orchestrator | + 
stop_before_destroy = false 2026-03-11 00:02:19.808065 | orchestrator | + updated = (known after apply) 2026-03-11 00:02:19.808069 | orchestrator | + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2026-03-11 00:02:19.808073 | orchestrator | 2026-03-11 00:02:19.808076 | orchestrator | + block_device { 2026-03-11 00:02:19.808080 | orchestrator | + boot_index = 0 2026-03-11 00:02:19.808084 | orchestrator | + delete_on_termination = false 2026-03-11 00:02:19.808088 | orchestrator | + destination_type = "volume" 2026-03-11 00:02:19.808091 | orchestrator | + multiattach = false 2026-03-11 00:02:19.808095 | orchestrator | + source_type = "volume" 2026-03-11 00:02:19.808099 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:19.808102 | orchestrator | } 2026-03-11 00:02:19.808106 | orchestrator | 2026-03-11 00:02:19.808110 | orchestrator | + network { 2026-03-11 00:02:19.808114 | orchestrator | + access_network = false 2026-03-11 00:02:19.808117 | orchestrator | + fixed_ip_v4 = (known after apply) 2026-03-11 00:02:19.808121 | orchestrator | + fixed_ip_v6 = (known after apply) 2026-03-11 00:02:19.808125 | orchestrator | + mac = (known after apply) 2026-03-11 00:02:19.808128 | orchestrator | + name = (known after apply) 2026-03-11 00:02:19.808132 | orchestrator | + port = (known after apply) 2026-03-11 00:02:19.808136 | orchestrator | + uuid = (known after apply) 2026-03-11 00:02:19.808140 | orchestrator | } 2026-03-11 00:02:19.808143 | orchestrator | } 2026-03-11 00:02:19.808147 | orchestrator | 2026-03-11 00:02:19.808151 | orchestrator | # openstack_compute_keypair_v2.key will be created 2026-03-11 00:02:19.808154 | orchestrator | + resource "openstack_compute_keypair_v2" "key" { 2026-03-11 00:02:19.808158 | orchestrator | + fingerprint = (known after apply) 2026-03-11 00:02:19.808162 | orchestrator | + id = (known after apply) 2026-03-11 00:02:19.808166 | orchestrator | + name = "testbed" 2026-03-11 00:02:19.808169 | orchestrator | + private_key = 
(sensitive value) 2026-03-11 00:02:19.808173 | orchestrator | + public_key = (known after apply) 2026-03-11 00:02:19.808177 | orchestrator | + region = (known after apply) 2026-03-11 00:02:19.808181 | orchestrator | + user_id = (known after apply) 2026-03-11 00:02:19.808184 | orchestrator | } 2026-03-11 00:02:19.808188 | orchestrator | 2026-03-11 00:02:19.808192 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created 2026-03-11 00:02:19.808196 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-11 00:02:19.808202 | orchestrator | + device = (known after apply) 2026-03-11 00:02:19.808206 | orchestrator | + id = (known after apply) 2026-03-11 00:02:19.808210 | orchestrator | + instance_id = (known after apply) 2026-03-11 00:02:19.808214 | orchestrator | + region = (known after apply) 2026-03-11 00:02:19.808218 | orchestrator | + volume_id = (known after apply) 2026-03-11 00:02:19.808221 | orchestrator | } 2026-03-11 00:02:19.808225 | orchestrator | 2026-03-11 00:02:19.808229 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created 2026-03-11 00:02:19.808233 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" { 2026-03-11 00:02:19.808237 | orchestrator | + device = (known after apply) 2026-03-11 00:02:19.808240 | orchestrator | + id = (known after apply) 2026-03-11 00:02:19.808244 | orchestrator | + instance_id = (known after apply) 2026-03-11 00:02:19.808248 | orchestrator | + region = (known after apply) 2026-03-11 00:02:19.808251 | orchestrator | + volume_id = (known after apply) 2026-03-11 00:02:19.808255 | orchestrator | } 2026-03-11 00:02:19.808259 | orchestrator | 2026-03-11 00:02:19.808263 | orchestrator | # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created 2026-03-11 00:02:19.808267 | orchestrator | + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" 
{
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.12"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[3] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.13"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[4] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.14"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[5] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.16.254/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/32"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/32"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.15"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_router_interface_v2.router_interface will be created
  + resource "openstack_networking_router_interface_v2" "router_interface" {
      + force_destroy = false
      + id            = (known after apply)
      + port_id       = (known after apply)
      + region        = (known after apply)
      + router_id     = (known after apply)
      + subnet_id     = (known after apply)
    }

  # openstack_networking_router_v2.router will be created
  + resource "openstack_networking_router_v2" "router" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + distributed             = (known after apply)
      + enable_snat             = (known after apply)
      + external_network_id     = "e6be7364-bfd8-4de7-8120-8f41c69a139a"
      + external_qos_policy_id  = (known after apply)
      + id                      = (known after apply)
      + name                    = "testbed"
      + region                  = (known after apply)
      + tenant_id               = (known after apply)

      + external_fixed_ip (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" {
      + description             = "ssh"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 22
      + port_range_min          = 22
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" {
      + description             = "wireguard"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + port_range_max          = 51820
      + port_range_min          = 51820
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "192.168.16.0/20"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "tcp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "udp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "icmp"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
      + description             = "vrrp"
      + direction               = "ingress"
      + ethertype               = "IPv4"
      + id                      = (known after apply)
      + protocol                = "112"
      + region                  = (known after apply)
      + remote_address_group_id = (known after apply)
      + remote_group_id         = (known after apply)
      + remote_ip_prefix        = "0.0.0.0/0"
      + security_group_id       = (known after apply)
      + tenant_id               = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_management will be created
  + resource "openstack_networking_secgroup_v2" "security_group_management" {
      + all_tags    = (known after apply)
      + description = "management security group"
      + id          = (known after apply)
      + name        = "testbed-management"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_secgroup_v2.security_group_node will be created
  + resource "openstack_networking_secgroup_v2" "security_group_node" {
      + all_tags    = (known after apply)
      + description = "node security group"
      + id          = (known after apply)
      + name        = "testbed-node"
      + region      = (known after apply)
      + stateful    = (known after apply)
      + tenant_id   = (known after apply)
    }

  # openstack_networking_subnet_v2.subnet_management will be created
  + resource "openstack_networking_subnet_v2" "subnet_management" {
      + all_tags          = (known after apply)
      + cidr              = "192.168.16.0/20"
      + dns_nameservers   = [
          + "8.8.8.8",
          + "9.9.9.9",
        ]
      + enable_dhcp       = true
      + gateway_ip        = (known after apply)
      + id                = (known after apply)
      + ip_version        = 4
      + ipv6_address_mode = (known after apply)
      + ipv6_ra_mode      = (known after apply)
      + name              = "subnet-testbed-management"
2026-03-11 00:02:19.822829 | orchestrator | + network_id = (known after apply) 2026-03-11 00:02:19.822837 | orchestrator | + no_gateway = false 2026-03-11 00:02:19.822845 | orchestrator | + region = (known after apply) 2026-03-11 00:02:19.822852 | orchestrator | + service_types = (known after apply) 2026-03-11 00:02:19.822867 | orchestrator | + tenant_id = (known after apply) 2026-03-11 00:02:19.822875 | orchestrator | 2026-03-11 00:02:19.822883 | orchestrator | + allocation_pool { 2026-03-11 00:02:19.822891 | orchestrator | + end = "192.168.31.250" 2026-03-11 00:02:19.822899 | orchestrator | + start = "192.168.31.200" 2026-03-11 00:02:19.822907 | orchestrator | } 2026-03-11 00:02:19.822915 | orchestrator | } 2026-03-11 00:02:19.822982 | orchestrator | 2026-03-11 00:02:19.823005 | orchestrator | # terraform_data.image will be created 2026-03-11 00:02:19.823014 | orchestrator | + resource "terraform_data" "image" { 2026-03-11 00:02:19.823022 | orchestrator | + id = (known after apply) 2026-03-11 00:02:19.823030 | orchestrator | + input = "Ubuntu 24.04" 2026-03-11 00:02:19.823038 | orchestrator | + output = (known after apply) 2026-03-11 00:02:19.823046 | orchestrator | } 2026-03-11 00:02:19.823100 | orchestrator | 2026-03-11 00:02:19.823122 | orchestrator | # terraform_data.image_node will be created 2026-03-11 00:02:19.823131 | orchestrator | + resource "terraform_data" "image_node" { 2026-03-11 00:02:19.823140 | orchestrator | + id = (known after apply) 2026-03-11 00:02:19.823148 | orchestrator | + input = "Ubuntu 24.04" 2026-03-11 00:02:19.823155 | orchestrator | + output = (known after apply) 2026-03-11 00:02:19.823163 | orchestrator | } 2026-03-11 00:02:19.823191 | orchestrator | 2026-03-11 00:02:19.823200 | orchestrator | Plan: 64 to add, 0 to change, 0 to destroy. 
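For readers reconstructing the configuration behind this plan: the subnet entry (fixed CIDR, DNS servers, and an allocation pool) and the `terraform_data` entries keyed on the image name would come from HCL along these lines. This is a sketch inferred only from the plan output above, not the actual testbed sources; everything not shown in the plan (in particular the `network_id` reference) is an assumption.

```hcl
# Sketch inferred from the plan output; not the actual testbed source files.
resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id # assumed reference
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  # DHCP only hands out addresses from this range, leaving the rest of
  # the /20 free for statically assigned node and manager ports.
  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}

# terraform_data mirrors the image name; resources that reference it via
# replace_triggered_by would be recreated whenever the image selection changes.
resource "terraform_data" "image" {
  input = "Ubuntu 24.04"
}
```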
2026-03-11 00:02:19.823222 | orchestrator |
2026-03-11 00:02:19.823231 | orchestrator | Changes to Outputs:
2026-03-11 00:02:19.823251 | orchestrator |   + manager_address = (sensitive value)
2026-03-11 00:02:19.823260 | orchestrator |   + private_key     = (sensitive value)
2026-03-11 00:02:19.991716 | orchestrator | terraform_data.image: Creating...
2026-03-11 00:02:19.991765 | orchestrator | terraform_data.image_node: Creating...
2026-03-11 00:02:19.991772 | orchestrator | terraform_data.image: Creation complete after 0s [id=75ce2175-2f53-d9a2-5247-4e738b677942]
2026-03-11 00:02:20.045577 | orchestrator | terraform_data.image_node: Creation complete after 0s [id=46e29aeb-d70a-a3f1-a195-f162070f15ee]
2026-03-11 00:02:20.066938 | orchestrator | data.openstack_images_image_v2.image: Reading...
2026-03-11 00:02:20.067697 | orchestrator | data.openstack_images_image_v2.image_node: Reading...
2026-03-11 00:02:20.082801 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2026-03-11 00:02:20.089087 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2026-03-11 00:02:20.089139 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2026-03-11 00:02:20.089146 | orchestrator | openstack_networking_network_v2.net_management: Creating...
2026-03-11 00:02:20.089151 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2026-03-11 00:02:20.090005 | orchestrator | openstack_compute_keypair_v2.key: Creating...
2026-03-11 00:02:20.092481 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2026-03-11 00:02:20.098488 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2026-03-11 00:02:20.585828 | orchestrator | data.openstack_images_image_v2.image: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-11 00:02:20.588140 | orchestrator | data.openstack_images_image_v2.image_node: Read complete after 1s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2026-03-11 00:02:20.590745 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2026-03-11 00:02:20.594991 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2026-03-11 00:02:20.752815 | orchestrator | openstack_compute_keypair_v2.key: Creation complete after 1s [id=testbed]
2026-03-11 00:02:20.758098 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2026-03-11 00:02:21.192820 | orchestrator | openstack_networking_network_v2.net_management: Creation complete after 1s [id=6e8d8db9-0d5c-441f-9ff9-3f9f2fe1ceb0]
2026-03-11 00:02:21.197793 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2026-03-11 00:02:23.696614 | orchestrator | openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 4s [id=eb5be362-3b33-4846-8138-86194f5d1a8a]
2026-03-11 00:02:23.702171 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2026-03-11 00:02:23.719568 | orchestrator | openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 4s [id=ae1c2658-52b8-455d-907b-e7170e3050e5]
2026-03-11 00:02:23.730731 | orchestrator | local_file.id_rsa_pub: Creating...
2026-03-11 00:02:23.737015 | orchestrator | local_file.id_rsa_pub: Creation complete after 0s [id=160e0cfc401e2dc5288ac45d1d83e4b2a6235d50]
2026-03-11 00:02:23.745193 | orchestrator | openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 4s [id=8ff314bd-8772-4cae-a8e3-239e2ae43cb3]
2026-03-11 00:02:23.747888 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2026-03-11 00:02:23.752911 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2026-03-11 00:02:23.767347 | orchestrator | openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 4s [id=093a0f58-cc4b-4485-9e6f-5c5128ebf642]
2026-03-11 00:02:23.776970 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2026-03-11 00:02:23.779062 | orchestrator | openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 4s [id=f36f8e1d-14c5-427c-b242-d446b19c77db]
2026-03-11 00:02:23.788416 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2026-03-11 00:02:23.790455 | orchestrator | openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 4s [id=b058385a-4b50-41f2-be6b-aeff7a6e6499]
2026-03-11 00:02:23.795602 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2026-03-11 00:02:23.838121 | orchestrator | openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=7fe845d7-e58c-4b3d-846a-c114ba83f0c4]
2026-03-11 00:02:23.844784 | orchestrator | local_sensitive_file.id_rsa: Creating...
2026-03-11 00:02:23.848462 | orchestrator | openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=fc665229-5891-49fd-b2c5-1ba6ac78c628]
2026-03-11 00:02:23.851542 | orchestrator | local_sensitive_file.id_rsa: Creation complete after 0s [id=74ae1f22cec48d690699031d77097eff3998552b]
2026-03-11 00:02:23.854568 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creating...
2026-03-11 00:02:23.960937 | orchestrator | openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=288642ce-5fa9-4bc7-a508-61d675ea6136]
2026-03-11 00:02:24.537019 | orchestrator | openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=8e030f3f-8c11-4c7e-87dd-0a510df75d92]
2026-03-11 00:02:26.074581 | orchestrator | openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=c368a630-9377-465f-9e5b-c26fca23363e]
2026-03-11 00:02:26.074622 | orchestrator | openstack_networking_router_v2.router: Creating...
2026-03-11 00:02:27.081649 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 3s [id=0c7e2588-12fd-42af-aa14-3920652e8891]
2026-03-11 00:02:27.136203 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=cf393f00-e485-43dd-9184-e931a616dca6]
2026-03-11 00:02:27.172078 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 3s [id=b5772bb3-dfe8-42a5-804b-c4140f3b8e5a]
2026-03-11 00:02:27.194341 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 3s [id=fbcd8f10-01e6-46d3-8161-dd0ec29d23f2]
2026-03-11 00:02:27.225061 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 3s [id=32780fff-28da-4ed5-b9f8-cc520a8285e8]
2026-03-11 00:02:27.230132 | orchestrator | openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 3s [id=dc47894e-e8a2-41fd-b2d5-937966a93d0e]
2026-03-11 00:02:27.786952 | orchestrator | openstack_networking_router_v2.router: Creation complete after 3s [id=b87f609a-0559-480e-b586-b5461aee14ba]
2026-03-11 00:02:27.791627 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creating...
2026-03-11 00:02:27.796163 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creating...
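The security groups and rules being created at this point correspond to the plan entries shown earlier. In HCL they would look roughly like the sketch below; this is reconstructed from the plan output, not taken from the testbed sources, and which security group the VRRP rule actually attaches to is not visible in the plan (the node group is assumed here).

```hcl
# Sketch inferred from the plan output; not the actual testbed source files.
resource "openstack_networking_secgroup_v2" "security_group_node" {
  name        = "testbed-node"
  description = "node security group"
}

# The plan shows one ingress rule per protocol (tcp, udp, icmp) plus a
# VRRP rule; VRRP has no TCP/UDP port, it is IP protocol number 112.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112"
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id # assumed target
}
```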
2026-03-11 00:02:27.796204 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creating...
2026-03-11 00:02:28.033619 | orchestrator | openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=7828f964-a7c3-4484-ab77-17862173098a]
2026-03-11 00:02:28.044270 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2026-03-11 00:02:28.048185 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2026-03-11 00:02:28.056876 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creating...
2026-03-11 00:02:28.056980 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creating...
2026-03-11 00:02:28.062246 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2026-03-11 00:02:28.068404 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creating...
2026-03-11 00:02:28.068829 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creating...
2026-03-11 00:02:28.069160 | orchestrator | openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=e6f59a68-ec93-48a2-9565-a88a39df8b99]
2026-03-11 00:02:28.072261 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creating...
2026-03-11 00:02:28.092650 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creating...
2026-03-11 00:02:28.299181 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=6822b6a3-59ff-4d58-a13d-f06e63ed4321]
2026-03-11 00:02:28.304675 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2026-03-11 00:02:28.557608 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=043f8f03-15cc-4beb-b210-3f5dfd35fab9]
2026-03-11 00:02:28.564090 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2026-03-11 00:02:28.785397 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 0s [id=8d7ac772-5c80-41b0-9f36-1b54e977a6f5]
2026-03-11 00:02:28.789780 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2026-03-11 00:02:28.796423 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=ea797922-2e64-4862-98dd-6f91f19c701c]
2026-03-11 00:02:28.806105 | orchestrator | openstack_networking_port_v2.manager_port_management: Creating...
2026-03-11 00:02:28.993260 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=f397afe7-63d9-472b-af13-814e28d48686]
2026-03-11 00:02:28.999589 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2026-03-11 00:02:29.006501 | orchestrator | openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=8bb4d711-2f66-4ac7-b847-7b84312f3e3a]
2026-03-11 00:02:29.012441 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2026-03-11 00:02:29.092141 | orchestrator | openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=a495151b-a411-4e1b-baf1-08934b657fd7]
2026-03-11 00:02:29.098677 | orchestrator | openstack_networking_port_v2.node_port_management[0]: Creation complete after 1s [id=1f27bd64-8d5e-4649-aaf2-adb12de1d78f]
2026-03-11 00:02:29.100217 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2026-03-11 00:02:29.101901 | orchestrator | openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=71b9aa84-419d-407f-9b8c-7da54b2efb79]
2026-03-11 00:02:29.164091 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=ae905661-4b02-4710-8109-c73b1f1e74db]
2026-03-11 00:02:29.300275 | orchestrator | openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=f84fe94f-5681-4fab-a051-1e7c2f04a800]
2026-03-11 00:02:29.329262 | orchestrator | openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=32854ab3-b687-4d0d-8f4e-5e7a5da5ad9f]
2026-03-11 00:02:29.352004 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=d7d5b328-1a6b-4ab8-bbd0-d46b2509328e]
2026-03-11 00:02:29.373739 | orchestrator | openstack_networking_port_v2.manager_port_management: Creation complete after 0s [id=c2e46856-afa2-453c-a2de-a3245e5e30a3]
2026-03-11 00:02:29.514765 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 1s [id=4f34e1f1-f587-4f2c-8504-ef2eec7ad8ae]
2026-03-11 00:02:29.810216 | orchestrator | openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=42112c28-e4e7-448e-b952-07b6017aeab2]
2026-03-11 00:02:31.316533 | orchestrator | openstack_networking_router_interface_v2.router_interface: Creation complete after 3s [id=313938ba-4275-4823-9e91-1ce9a5089ed8]
2026-03-11 00:02:31.345130 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2026-03-11 00:02:31.356804 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creating...
2026-03-11 00:02:31.357279 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creating...
2026-03-11 00:02:31.357463 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creating...
2026-03-11 00:02:31.364183 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creating...
2026-03-11 00:02:31.370391 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creating...
2026-03-11 00:02:31.395299 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creating...
2026-03-11 00:02:32.981512 | orchestrator | openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 2s [id=01e3531e-bd32-4012-8048-a775743fc83c]
2026-03-11 00:02:32.990385 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2026-03-11 00:02:32.999712 | orchestrator | local_file.inventory: Creating...
2026-03-11 00:02:33.001196 | orchestrator | local_file.MANAGER_ADDRESS: Creating...
2026-03-11 00:02:33.003593 | orchestrator | local_file.inventory: Creation complete after 0s [id=f56510665796bab980a96e8c08586eecf780422b]
2026-03-11 00:02:33.008087 | orchestrator | local_file.MANAGER_ADDRESS: Creation complete after 0s [id=ac382b97543fb64b5c180388f5f2911f26b0c46a]
2026-03-11 00:02:34.758475 | orchestrator | openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 2s [id=01e3531e-bd32-4012-8048-a775743fc83c]
2026-03-11 00:02:41.360537 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2026-03-11 00:02:41.363911 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2026-03-11 00:02:41.364027 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2026-03-11 00:02:41.367110 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2026-03-11 00:02:41.376521 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2026-03-11 00:02:41.396978 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2026-03-11 00:02:51.368930 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2026-03-11 00:02:51.369030 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2026-03-11 00:02:51.369041 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2026-03-11 00:02:51.369048 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2026-03-11 00:02:51.377361 | orchestrator | openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2026-03-11 00:02:51.397944 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2026-03-11 00:02:52.193761 | orchestrator | openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=09bf6a50-1505-4306-b500-12babac3af19]
2026-03-11 00:03:01.369331 | orchestrator | openstack_compute_instance_v2.node_server[1]: Still creating... [30s elapsed]
2026-03-11 00:03:01.369421 | orchestrator | openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2026-03-11 00:03:01.369438 | orchestrator | openstack_compute_instance_v2.node_server[0]: Still creating... [30s elapsed]
2026-03-11 00:03:01.369444 | orchestrator | openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2026-03-11 00:03:01.398830 | orchestrator | openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2026-03-11 00:03:02.307732 | orchestrator | openstack_compute_instance_v2.node_server[0]: Creation complete after 31s [id=220e93ce-1154-4864-9bc0-2f91591d6841]
2026-03-11 00:03:02.438617 | orchestrator | openstack_compute_instance_v2.node_server[1]: Creation complete after 31s [id=786afe7b-b3c7-442e-a6f5-c484ff90cfb6]
2026-03-11 00:03:02.531355 | orchestrator | openstack_compute_instance_v2.node_server[2]: Creation complete after 32s [id=78c2bf6f-829d-4c58-8256-9aa769bdd40a]
2026-03-11 00:03:02.688174 | orchestrator | openstack_compute_instance_v2.node_server[4]: Creation complete after 32s [id=58a3b99b-de40-40d4-8151-32b9b548d0f3]
2026-03-11 00:03:02.712846 | orchestrator | openstack_compute_instance_v2.node_server[3]: Creation complete after 32s [id=3a4a31c1-dcbd-44f8-a32a-47867b33b2d0]
2026-03-11 00:03:02.735925 | orchestrator | null_resource.node_semaphore: Creating...
2026-03-11 00:03:02.738648 | orchestrator | null_resource.node_semaphore: Creation complete after 0s [id=1704141630895890131]
2026-03-11 00:03:02.742550 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2026-03-11 00:03:02.742612 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2026-03-11 00:03:02.744676 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2026-03-11 00:03:02.751331 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2026-03-11 00:03:02.757169 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2026-03-11 00:03:02.760182 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2026-03-11 00:03:02.762104 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2026-03-11 00:03:02.770138 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
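The `null_resource.node_semaphore` created immediately before the volume attachments suggests a gating pattern: the attachments depend on a single no-op resource that in turn depends on all node servers, so no volume is attached until every instance exists. A hedged sketch of that pattern follows; the count, the volume-to-server mapping, and all expressions are assumptions, not the actual testbed configuration.

```hcl
# Sketch of the gating pattern implied by the apply order; expressions are assumed.
resource "null_resource" "node_semaphore" {
  # Referencing the whole list makes the semaphore wait for all servers.
  depends_on = [openstack_compute_instance_v2.node_server]
}

resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count = 9

  # The log shows the 9 volumes spread across 3 of the 6 servers;
  # the exact index mapping is not recoverable from the log, so this
  # expression is purely illustrative.
  instance_id = openstack_compute_instance_v2.node_server[count.index % 6].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id

  depends_on = [null_resource.node_semaphore]
}
```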
2026-03-11 00:03:02.778352 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2026-03-11 00:03:02.779438 | orchestrator | openstack_compute_instance_v2.manager_server: Creating...
2026-03-11 00:03:06.140702 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 3s [id=3a4a31c1-dcbd-44f8-a32a-47867b33b2d0/8ff314bd-8772-4cae-a8e3-239e2ae43cb3]
2026-03-11 00:03:06.157392 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=09bf6a50-1505-4306-b500-12babac3af19/fc665229-5891-49fd-b2c5-1ba6ac78c628]
2026-03-11 00:03:06.227473 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=58a3b99b-de40-40d4-8151-32b9b548d0f3/288642ce-5fa9-4bc7-a508-61d675ea6136]
2026-03-11 00:03:12.267752 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 9s [id=3a4a31c1-dcbd-44f8-a32a-47867b33b2d0/ae1c2658-52b8-455d-907b-e7170e3050e5]
2026-03-11 00:03:12.286538 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=09bf6a50-1505-4306-b500-12babac3af19/b058385a-4b50-41f2-be6b-aeff7a6e6499]
2026-03-11 00:03:12.313546 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 9s [id=58a3b99b-de40-40d4-8151-32b9b548d0f3/f36f8e1d-14c5-427c-b242-d446b19c77db]
2026-03-11 00:03:12.354985 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 9s [id=3a4a31c1-dcbd-44f8-a32a-47867b33b2d0/093a0f58-cc4b-4485-9e6f-5c5128ebf642]
2026-03-11 00:03:12.389275 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 9s [id=09bf6a50-1505-4306-b500-12babac3af19/7fe845d7-e58c-4b3d-846a-c114ba83f0c4]
2026-03-11 00:03:12.468061 | orchestrator | openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 9s [id=58a3b99b-de40-40d4-8151-32b9b548d0f3/eb5be362-3b33-4846-8138-86194f5d1a8a]
2026-03-11 00:03:12.783857 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2026-03-11 00:03:22.784859 | orchestrator | openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2026-03-11 00:03:23.243342 | orchestrator | openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=c74190dc-3f97-4135-a8c8-ef1e499d2afe]
2026-03-11 00:03:23.255914 | orchestrator |
2026-03-11 00:03:23.255995 | orchestrator | Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2026-03-11 00:03:23.256006 | orchestrator |
2026-03-11 00:03:23.256013 | orchestrator | Outputs:
2026-03-11 00:03:23.256020 | orchestrator |
2026-03-11 00:03:23.256028 | orchestrator | manager_address =
2026-03-11 00:03:23.256037 | orchestrator | private_key =
2026-03-11 00:03:23.623777 | orchestrator | ok: Runtime: 0:01:09.333580
2026-03-11 00:03:23.654215 |
2026-03-11 00:03:23.654346 | TASK [Create infrastructure (stable)]
2026-03-11 00:03:24.189412 | orchestrator | skipping: Conditional result was False
2026-03-11 00:03:24.215404 |
2026-03-11 00:03:24.215593 | TASK [Fetch manager address]
2026-03-11 00:03:24.721576 | orchestrator | ok
2026-03-11 00:03:24.735822 |
2026-03-11 00:03:24.736030 | TASK [Set manager_host address]
2026-03-11 00:03:24.809441 | orchestrator | ok
2026-03-11 00:03:24.816902 |
2026-03-11 00:03:24.817013 | LOOP [Update ansible collections]
2026-03-11 00:03:25.815833 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-11 00:03:25.816719 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-11 00:03:25.816865 | orchestrator | Starting galaxy collection install process
2026-03-11 00:03:25.816899 | orchestrator | Process install dependency map
2026-03-11 00:03:25.816926 | orchestrator | Starting collection install process
2026-03-11 00:03:25.816980 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons'
2026-03-11 00:03:25.818001 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons
2026-03-11 00:03:25.818081 | orchestrator | osism.commons:999.0.0 was installed successfully
2026-03-11 00:03:25.818148 | orchestrator | ok: Item: commons Runtime: 0:00:00.653137
2026-03-11 00:03:26.910721 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2026-03-11 00:03:26.911224 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-11 00:03:26.911268 | orchestrator | Starting galaxy collection install process
2026-03-11 00:03:26.911293 | orchestrator | Process install dependency map
2026-03-11 00:03:26.911316 | orchestrator | Starting collection install process
2026-03-11 00:03:26.911337 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services'
2026-03-11 00:03:26.911358 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/services
2026-03-11 00:03:26.911378 | orchestrator | osism.services:999.0.0 was installed successfully
2026-03-11 00:03:26.911414 | orchestrator | ok: Item: services Runtime: 0:00:00.741728
2026-03-11 00:03:26.934254 |
2026-03-11 00:03:26.934402 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2026-03-11 00:03:37.580582 | orchestrator | ok
2026-03-11 00:03:37.592292 |
2026-03-11 00:03:37.592556 | TASK [Wait a little longer for the manager so that everything is ready]
2026-03-11 00:04:37.648523 | orchestrator | ok
2026-03-11 00:04:37.658781 |
2026-03-11 00:04:37.658923 | TASK [Fetch manager ssh hostkey]
2026-03-11 00:04:39.230609 | orchestrator | Output suppressed because no_log was given
2026-03-11 00:04:39.246997 |
2026-03-11 00:04:39.247203 | TASK [Get ssh keypair from terraform environment]
2026-03-11 00:04:39.783572 | orchestrator | ok: Runtime: 0:00:00.005976
2026-03-11 00:04:39.800835 |
2026-03-11 00:04:39.801017 | TASK [Point out that the following task takes some time and does not give any output]
2026-03-11 00:04:39.850681 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2026-03-11 00:04:39.860869 |
2026-03-11 00:04:39.861004 | TASK [Run manager part 0]
2026-03-11 00:04:40.816112 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2026-03-11 00:04:40.870409 | orchestrator |
2026-03-11 00:04:40.870461 | orchestrator | PLAY [Wait for cloud-init to finish] *******************************************
2026-03-11 00:04:40.870473 | orchestrator |
2026-03-11 00:04:40.870492 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] *****************************
2026-03-11 00:04:42.492207 | orchestrator | ok: [testbed-manager]
2026-03-11 00:04:42.492256 | orchestrator |
2026-03-11 00:04:42.492279 | orchestrator | PLAY [Run manager part 0] ******************************************************
2026-03-11 00:04:42.492289 | orchestrator |
2026-03-11 00:04:42.492298 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-11 00:04:44.307624 | orchestrator | ok: [testbed-manager]
2026-03-11 00:04:44.307688 | orchestrator |
2026-03-11 00:04:44.307696 | orchestrator | TASK [Get home directory of ansible user] **************************************
2026-03-11 00:04:44.926339 | orchestrator | ok: [testbed-manager]
2026-03-11 00:04:44.926412 | orchestrator |
2026-03-11 00:04:44.926425 | orchestrator | TASK [Set repo_path fact] ******************************************************
2026-03-11 00:04:44.972032 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:04:44.972106 | orchestrator |
2026-03-11 00:04:44.972121 | orchestrator | TASK [Update package cache] ****************************************************
2026-03-11 00:04:44.999177 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:04:44.999236 | orchestrator |
2026-03-11 00:04:44.999245 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-11 00:04:45.029493 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:04:45.029570 | orchestrator |
2026-03-11 00:04:45.029582 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-11 00:04:45.058857 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:04:45.058952 | orchestrator |
2026-03-11 00:04:45.058960 | orchestrator | TASK [Set venv_command fact (RedHat)] ******************************************
2026-03-11 00:04:45.089308 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:04:45.089356 | orchestrator |
2026-03-11 00:04:45.089364 | orchestrator | TASK [Fail if Ubuntu version is lower than 24.04] ******************************
2026-03-11 00:04:45.122875 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:04:45.122917 | orchestrator |
2026-03-11 00:04:45.122925 | orchestrator | TASK [Fail if Debian version is lower than 12] *********************************
2026-03-11 00:04:45.153854 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:04:45.153907 | orchestrator |
2026-03-11 00:04:45.153918 | orchestrator | TASK [Set APT options on manager] **********************************************
2026-03-11 00:04:45.793054 | orchestrator | changed: [testbed-manager]
2026-03-11 00:04:45.793101 | orchestrator |
2026-03-11 00:04:45.793108 | orchestrator | TASK [Update APT cache and run dist-upgrade] ***********************************
2026-03-11 00:07:26.101169 | orchestrator | changed: [testbed-manager]
2026-03-11 00:07:26.101282 | orchestrator |
2026-03-11 00:07:26.101312 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-11 00:09:15.103508 | orchestrator | changed: [testbed-manager]
2026-03-11 00:09:15.103724 | orchestrator |
2026-03-11 00:09:15.103747 | orchestrator | TASK [Install required packages] ***********************************************
2026-03-11 00:09:34.740540 | orchestrator | changed: [testbed-manager]
2026-03-11 00:09:34.740702 | orchestrator |
2026-03-11 00:09:34.740726 | orchestrator | TASK [Remove some python packages] *********************************************
2026-03-11 00:09:43.279105 | orchestrator | changed: [testbed-manager]
2026-03-11 00:09:43.279202 | orchestrator |
2026-03-11 00:09:43.279210 | orchestrator | TASK [Set venv_command fact (Debian)] ******************************************
2026-03-11 00:09:43.324341 | orchestrator | ok: [testbed-manager]
2026-03-11 00:09:43.324441 | orchestrator |
2026-03-11 00:09:43.324452 | orchestrator | TASK [Get current user] ********************************************************
2026-03-11 00:09:44.150200 | orchestrator | ok: [testbed-manager]
2026-03-11 00:09:44.150297 | orchestrator |
2026-03-11 00:09:44.150315 | orchestrator | TASK [Create venv directory] ***************************************************
2026-03-11 00:09:44.959126 | orchestrator | changed: [testbed-manager]
2026-03-11 00:09:44.959242 | orchestrator |
2026-03-11 00:09:44.959469 | orchestrator | TASK [Install netaddr in venv] *************************************************
2026-03-11 00:09:51.283154 | orchestrator | changed: [testbed-manager]
2026-03-11 00:09:51.283377 | orchestrator |
2026-03-11 00:09:51.283417 | orchestrator | TASK [Install ansible-core in venv] ********************************************
2026-03-11 00:09:57.122367 | orchestrator | changed:
[testbed-manager] 2026-03-11 00:09:57.122423 | orchestrator | 2026-03-11 00:09:57.122433 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2026-03-11 00:09:59.777664 | orchestrator | changed: [testbed-manager] 2026-03-11 00:09:59.777718 | orchestrator | 2026-03-11 00:09:59.777727 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2026-03-11 00:10:01.561022 | orchestrator | changed: [testbed-manager] 2026-03-11 00:10:01.561127 | orchestrator | 2026-03-11 00:10:01.561137 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2026-03-11 00:10:02.716056 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-11 00:10:02.716175 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-11 00:10:02.716189 | orchestrator | 2026-03-11 00:10:02.716202 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2026-03-11 00:10:02.760028 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-11 00:10:02.760142 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-11 00:10:02.760155 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-11 00:10:02.760165 | orchestrator | deprecation_warnings=False in ansible.cfg. 
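The DEPRECATION WARNING above states its own remedy: setting `deprecation_warnings=False` in ansible.cfg. A minimal config fragment per that hint (the file location, e.g. `./ansible.cfg` or `~/.ansible.cfg`, is an assumption and not taken from this job):

```ini
; Minimal ansible.cfg fragment, as suggested by the deprecation warning above.
[defaults]
deprecation_warnings = False
```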
2026-03-11 00:10:05.912177 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2026-03-11 00:10:05.912296 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2026-03-11 00:10:05.912311 | orchestrator | 2026-03-11 00:10:05.912325 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2026-03-11 00:10:06.490898 | orchestrator | changed: [testbed-manager] 2026-03-11 00:10:06.490974 | orchestrator | 2026-03-11 00:10:06.490986 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2026-03-11 00:12:27.402224 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2026-03-11 00:12:27.402304 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2026-03-11 00:12:27.402318 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2026-03-11 00:12:27.402327 | orchestrator | 2026-03-11 00:12:27.402337 | orchestrator | TASK [Install local collections] *********************************************** 2026-03-11 00:12:29.730358 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2026-03-11 00:12:29.730504 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2026-03-11 00:12:29.730521 | orchestrator | 2026-03-11 00:12:29.730535 | orchestrator | PLAY [Create operator user] **************************************************** 2026-03-11 00:12:29.730547 | orchestrator | 2026-03-11 00:12:29.730559 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-11 00:12:31.100889 | orchestrator | ok: [testbed-manager] 2026-03-11 00:12:31.100948 | orchestrator | 2026-03-11 00:12:31.100956 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-11 00:12:31.143171 | orchestrator | ok: [testbed-manager] 2026-03-11 00:12:31.143224 | 
orchestrator | 2026-03-11 00:12:31.143230 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-11 00:12:31.211734 | orchestrator | ok: [testbed-manager] 2026-03-11 00:12:31.211832 | orchestrator | 2026-03-11 00:12:31.211849 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2026-03-11 00:12:31.992381 | orchestrator | changed: [testbed-manager] 2026-03-11 00:12:31.992523 | orchestrator | 2026-03-11 00:12:31.992541 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2026-03-11 00:12:32.710093 | orchestrator | changed: [testbed-manager] 2026-03-11 00:12:32.710211 | orchestrator | 2026-03-11 00:12:32.710226 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2026-03-11 00:12:34.081128 | orchestrator | changed: [testbed-manager] => (item=adm) 2026-03-11 00:12:34.081270 | orchestrator | changed: [testbed-manager] => (item=sudo) 2026-03-11 00:12:34.081285 | orchestrator | 2026-03-11 00:12:34.081320 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2026-03-11 00:12:35.454300 | orchestrator | changed: [testbed-manager] 2026-03-11 00:12:35.454570 | orchestrator | 2026-03-11 00:12:35.454595 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2026-03-11 00:12:37.148811 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2026-03-11 00:12:37.148928 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2026-03-11 00:12:37.148943 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2026-03-11 00:12:37.148956 | orchestrator | 2026-03-11 00:12:37.148969 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2026-03-11 00:12:37.212686 | orchestrator | skipping: 
[testbed-manager] 2026-03-11 00:12:37.212784 | orchestrator | 2026-03-11 00:12:37.212797 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] *** 2026-03-11 00:12:37.285152 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:12:37.285252 | orchestrator | 2026-03-11 00:12:37.285270 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2026-03-11 00:12:37.862484 | orchestrator | changed: [testbed-manager] 2026-03-11 00:12:37.862602 | orchestrator | 2026-03-11 00:12:37.862621 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2026-03-11 00:12:37.968299 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:12:37.968407 | orchestrator | 2026-03-11 00:12:37.968422 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2026-03-11 00:12:38.800051 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-11 00:12:38.800110 | orchestrator | changed: [testbed-manager] 2026-03-11 00:12:38.800119 | orchestrator | 2026-03-11 00:12:38.800127 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2026-03-11 00:12:38.842099 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:12:38.842153 | orchestrator | 2026-03-11 00:12:38.842163 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2026-03-11 00:12:38.878099 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:12:38.878210 | orchestrator | 2026-03-11 00:12:38.878228 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2026-03-11 00:12:38.915383 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:12:38.915516 | orchestrator | 2026-03-11 00:12:38.915538 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2026-03-11 00:12:38.989099 | 
orchestrator | skipping: [testbed-manager] 2026-03-11 00:12:38.989159 | orchestrator | 2026-03-11 00:12:38.989170 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2026-03-11 00:12:39.712586 | orchestrator | ok: [testbed-manager] 2026-03-11 00:12:39.712681 | orchestrator | 2026-03-11 00:12:39.712697 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2026-03-11 00:12:39.712710 | orchestrator | 2026-03-11 00:12:39.712722 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-11 00:12:41.097850 | orchestrator | ok: [testbed-manager] 2026-03-11 00:12:41.097979 | orchestrator | 2026-03-11 00:12:41.098000 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2026-03-11 00:12:42.049429 | orchestrator | changed: [testbed-manager] 2026-03-11 00:12:42.049522 | orchestrator | 2026-03-11 00:12:42.049529 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:12:42.049536 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0 2026-03-11 00:12:42.049541 | orchestrator | 2026-03-11 00:12:42.683935 | orchestrator | ok: Runtime: 0:08:02.011790 2026-03-11 00:12:42.693989 | 2026-03-11 00:12:42.694120 | TASK [Point out that logging in to the manager is now possible] 2026-03-11 00:12:42.724926 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2026-03-11 00:12:42.732261 | 2026-03-11 00:12:42.732395 | TASK [Point out that the following task takes some time and does not give any output] 2026-03-11 00:12:42.763634 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
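The PLAY RECAP line above (`testbed-manager : ok=33 changed=23 unreachable=0 failed=0 ...`) is easy to post-process when scraping job logs like this one. A minimal sketch (the function name and regex are illustrative, not part of the job):

```python
import re

def parse_recap(line: str) -> dict:
    """Parse an ansible-playbook PLAY RECAP host line, e.g.
    'testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0'
    into {'host': ..., 'ok': 33, 'changed': 23, ...}."""
    host, _, stats = line.partition(" : ")
    return {
        "host": host.strip(),
        # Collect every key=value counter into int-valued entries.
        **{k: int(v) for k, v in re.findall(r"(\w+)=(\d+)", stats)},
    }

recap = parse_recap(
    "testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=14 rescued=0 ignored=0"
)
```

A caller could then, for example, fail a post-processing step whenever `recap["failed"]` or `recap["unreachable"]` is nonzero.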
2026-03-11 00:12:42.770912 | 2026-03-11 00:12:42.771043 | TASK [Run manager part 1 + 2] 2026-03-11 00:12:43.686550 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2026-03-11 00:12:43.744232 | orchestrator | 2026-03-11 00:12:43.744281 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2026-03-11 00:12:43.744289 | orchestrator | 2026-03-11 00:12:43.744301 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-11 00:12:46.720719 | orchestrator | ok: [testbed-manager] 2026-03-11 00:12:46.720817 | orchestrator | 2026-03-11 00:12:46.720876 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2026-03-11 00:12:46.754509 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:12:46.754598 | orchestrator | 2026-03-11 00:12:46.754627 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2026-03-11 00:12:46.806125 | orchestrator | ok: [testbed-manager] 2026-03-11 00:12:46.806213 | orchestrator | 2026-03-11 00:12:46.806229 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-11 00:12:46.863862 | orchestrator | ok: [testbed-manager] 2026-03-11 00:12:46.863954 | orchestrator | 2026-03-11 00:12:46.863972 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-11 00:12:46.936873 | orchestrator | ok: [testbed-manager] 2026-03-11 00:12:46.936974 | orchestrator | 2026-03-11 00:12:46.936991 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-11 00:12:46.995967 | orchestrator | ok: [testbed-manager] 2026-03-11 00:12:46.996053 | orchestrator | 2026-03-11 00:12:46.996070 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-11 00:12:47.045420 | 
orchestrator | included: /home/zuul-testbed03/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2026-03-11 00:12:47.045527 | orchestrator | 2026-03-11 00:12:47.045542 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-11 00:12:47.809895 | orchestrator | ok: [testbed-manager] 2026-03-11 00:12:47.809990 | orchestrator | 2026-03-11 00:12:47.810008 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-11 00:12:47.867300 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:12:47.867395 | orchestrator | 2026-03-11 00:12:47.867411 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-11 00:12:49.263882 | orchestrator | changed: [testbed-manager] 2026-03-11 00:12:49.263990 | orchestrator | 2026-03-11 00:12:49.264011 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-11 00:12:49.831025 | orchestrator | ok: [testbed-manager] 2026-03-11 00:12:49.831136 | orchestrator | 2026-03-11 00:12:49.831162 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-11 00:12:50.938448 | orchestrator | changed: [testbed-manager] 2026-03-11 00:12:50.938677 | orchestrator | 2026-03-11 00:12:50.938694 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-11 00:13:06.116826 | orchestrator | changed: [testbed-manager] 2026-03-11 00:13:06.116884 | orchestrator | 2026-03-11 00:13:06.116890 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2026-03-11 00:13:06.737264 | orchestrator | ok: [testbed-manager] 2026-03-11 00:13:06.855971 | orchestrator | 2026-03-11 00:13:06.856007 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2026-03-11 00:13:06.856022 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:13:06.856027 | orchestrator | 2026-03-11 00:13:06.856031 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2026-03-11 00:13:07.724691 | orchestrator | changed: [testbed-manager] 2026-03-11 00:13:07.724739 | orchestrator | 2026-03-11 00:13:07.724748 | orchestrator | TASK [Copy SSH private key] **************************************************** 2026-03-11 00:13:08.693135 | orchestrator | changed: [testbed-manager] 2026-03-11 00:13:08.693185 | orchestrator | 2026-03-11 00:13:08.693195 | orchestrator | TASK [Create configuration directory] ****************************************** 2026-03-11 00:13:09.276978 | orchestrator | changed: [testbed-manager] 2026-03-11 00:13:09.277019 | orchestrator | 2026-03-11 00:13:09.277025 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2026-03-11 00:13:09.319215 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2026-03-11 00:13:09.319327 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2026-03-11 00:13:09.319343 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2026-03-11 00:13:09.319356 | orchestrator | deprecation_warnings=False in ansible.cfg. 
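Several tasks in this part pin minimum versions (`requests>=2.32.2`, `docker>=7.1.0`). A stdlib-only sketch of how such a `>=` pin can be checked — real installers use `packaging.specifiers`; the helper names here are illustrative and this handles plain `X.Y.Z` versions only:

```python
def version_tuple(v: str) -> tuple:
    # "2.32.2" -> (2, 32, 2); assumes purely numeric dotted versions.
    return tuple(int(p) for p in v.split("."))

def satisfies_min(installed: str, spec: str) -> bool:
    """spec is of the form 'name>=X.Y.Z'; True if installed >= X.Y.Z."""
    _, _, minimum = spec.partition(">=")
    return version_tuple(installed) >= version_tuple(minimum)
```

Tuple comparison is lexicographic per component, which matches semantic ordering for numeric versions (so `2.31.0` correctly fails a `>=2.32.2` pin).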
2026-03-11 00:13:11.334208 | orchestrator | changed: [testbed-manager] 2026-03-11 00:13:11.334309 | orchestrator | 2026-03-11 00:13:11.334325 | orchestrator | TASK [Install python requirements in venv] ************************************* 2026-03-11 00:13:19.887777 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2026-03-11 00:13:19.887828 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2026-03-11 00:13:19.887838 | orchestrator | ok: [testbed-manager] => (item=packaging) 2026-03-11 00:13:19.887845 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2026-03-11 00:13:19.887856 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2026-03-11 00:13:19.887863 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2026-03-11 00:13:19.887870 | orchestrator | 2026-03-11 00:13:19.887877 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2026-03-11 00:13:20.907015 | orchestrator | changed: [testbed-manager] 2026-03-11 00:13:20.907090 | orchestrator | 2026-03-11 00:13:20.907106 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2026-03-11 00:13:20.957225 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:13:20.957308 | orchestrator | 2026-03-11 00:13:20.957326 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2026-03-11 00:13:23.904478 | orchestrator | changed: [testbed-manager] 2026-03-11 00:13:23.904578 | orchestrator | 2026-03-11 00:13:23.904597 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2026-03-11 00:13:23.940942 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:13:23.941033 | orchestrator | 2026-03-11 00:13:23.941051 | orchestrator | TASK [Run manager part 2] ****************************************************** 2026-03-11 00:14:57.275703 | orchestrator | changed: [testbed-manager] 2026-03-11 
00:14:57.275798 | orchestrator | 2026-03-11 00:14:57.275816 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-11 00:14:58.353784 | orchestrator | ok: [testbed-manager] 2026-03-11 00:14:58.353900 | orchestrator | 2026-03-11 00:14:58.353927 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:14:58.353948 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2026-03-11 00:14:58.353967 | orchestrator | 2026-03-11 00:14:58.941723 | orchestrator | ok: Runtime: 0:02:15.370657 2026-03-11 00:14:58.959363 | 2026-03-11 00:14:58.959577 | TASK [Reboot manager] 2026-03-11 00:15:00.498008 | orchestrator | ok: Runtime: 0:00:00.930477 2026-03-11 00:15:00.514716 | 2026-03-11 00:15:00.514922 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2026-03-11 00:15:16.956531 | orchestrator | ok 2026-03-11 00:15:16.972006 | 2026-03-11 00:15:16.972225 | TASK [Wait a little longer for the manager so that everything is ready] 2026-03-11 00:16:17.020425 | orchestrator | ok 2026-03-11 00:16:17.031782 | 2026-03-11 00:16:17.031929 | TASK [Deploy manager + bootstrap nodes] 2026-03-11 00:16:20.886513 | orchestrator | 2026-03-11 00:16:20.886700 | orchestrator | # DEPLOY MANAGER 2026-03-11 00:16:20.886726 | orchestrator | 2026-03-11 00:16:20.886741 | orchestrator | + set -e 2026-03-11 00:16:20.886755 | orchestrator | + echo 2026-03-11 00:16:20.886769 | orchestrator | + echo '# DEPLOY MANAGER' 2026-03-11 00:16:20.886787 | orchestrator | + echo 2026-03-11 00:16:20.886835 | orchestrator | + cat /opt/manager-vars.sh 2026-03-11 00:16:20.890219 | orchestrator | export NUMBER_OF_NODES=6 2026-03-11 00:16:20.890260 | orchestrator | 2026-03-11 00:16:20.890276 | orchestrator | export CEPH_VERSION=reef 2026-03-11 00:16:20.890291 | orchestrator | export CONFIGURATION_VERSION=main 2026-03-11 00:16:20.890306 | orchestrator 
| export MANAGER_VERSION=latest 2026-03-11 00:16:20.890329 | orchestrator | export OPENSTACK_VERSION=2024.2 2026-03-11 00:16:20.890340 | orchestrator | 2026-03-11 00:16:20.890358 | orchestrator | export ARA=false 2026-03-11 00:16:20.890370 | orchestrator | export DEPLOY_MODE=manager 2026-03-11 00:16:20.890387 | orchestrator | export TEMPEST=true 2026-03-11 00:16:20.890399 | orchestrator | export IS_ZUUL=true 2026-03-11 00:16:20.890410 | orchestrator | 2026-03-11 00:16:20.890428 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.142 2026-03-11 00:16:20.890440 | orchestrator | export EXTERNAL_API=false 2026-03-11 00:16:20.890451 | orchestrator | 2026-03-11 00:16:20.890462 | orchestrator | export IMAGE_USER=ubuntu 2026-03-11 00:16:20.890539 | orchestrator | export IMAGE_NODE_USER=ubuntu 2026-03-11 00:16:20.890552 | orchestrator | 2026-03-11 00:16:20.890563 | orchestrator | export CEPH_STACK=ceph-ansible 2026-03-11 00:16:20.890582 | orchestrator | 2026-03-11 00:16:20.890593 | orchestrator | + echo 2026-03-11 00:16:20.890606 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-11 00:16:20.891721 | orchestrator | ++ export INTERACTIVE=false 2026-03-11 00:16:20.891745 | orchestrator | ++ INTERACTIVE=false 2026-03-11 00:16:20.891758 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-11 00:16:20.891771 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-11 00:16:20.891997 | orchestrator | + source /opt/manager-vars.sh 2026-03-11 00:16:20.892016 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-11 00:16:20.892028 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-11 00:16:20.892057 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-11 00:16:20.892068 | orchestrator | ++ CEPH_VERSION=reef 2026-03-11 00:16:20.892085 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-11 00:16:20.892123 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-11 00:16:20.892136 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-11 00:16:20.892147 | 
orchestrator | ++ MANAGER_VERSION=latest 2026-03-11 00:16:20.892158 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-11 00:16:20.892179 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-11 00:16:20.892213 | orchestrator | ++ export ARA=false 2026-03-11 00:16:20.892225 | orchestrator | ++ ARA=false 2026-03-11 00:16:20.892236 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-11 00:16:20.892251 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-11 00:16:20.892262 | orchestrator | ++ export TEMPEST=true 2026-03-11 00:16:20.892273 | orchestrator | ++ TEMPEST=true 2026-03-11 00:16:20.892284 | orchestrator | ++ export IS_ZUUL=true 2026-03-11 00:16:20.892317 | orchestrator | ++ IS_ZUUL=true 2026-03-11 00:16:20.892328 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.142 2026-03-11 00:16:20.892339 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.142 2026-03-11 00:16:20.892350 | orchestrator | ++ export EXTERNAL_API=false 2026-03-11 00:16:20.892361 | orchestrator | ++ EXTERNAL_API=false 2026-03-11 00:16:20.892372 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-11 00:16:20.892383 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-11 00:16:20.892394 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-11 00:16:20.892409 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-11 00:16:20.892421 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-11 00:16:20.892432 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-11 00:16:20.892443 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2026-03-11 00:16:20.942948 | orchestrator | + docker version 2026-03-11 00:16:21.069827 | orchestrator | Client: Docker Engine - Community 2026-03-11 00:16:21.069914 | orchestrator | Version: 27.5.1 2026-03-11 00:16:21.069927 | orchestrator | API version: 1.47 2026-03-11 00:16:21.069938 | orchestrator | Go version: go1.22.11 2026-03-11 00:16:21.069947 | orchestrator | Git commit: 9f9e405 2026-03-11 00:16:21.069955 
| orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-11 00:16:21.069964 | orchestrator | OS/Arch: linux/amd64 2026-03-11 00:16:21.069972 | orchestrator | Context: default 2026-03-11 00:16:21.069980 | orchestrator | 2026-03-11 00:16:21.069989 | orchestrator | Server: Docker Engine - Community 2026-03-11 00:16:21.069997 | orchestrator | Engine: 2026-03-11 00:16:21.070005 | orchestrator | Version: 27.5.1 2026-03-11 00:16:21.070081 | orchestrator | API version: 1.47 (minimum version 1.24) 2026-03-11 00:16:21.070132 | orchestrator | Go version: go1.22.11 2026-03-11 00:16:21.070140 | orchestrator | Git commit: 4c9b3b0 2026-03-11 00:16:21.070149 | orchestrator | Built: Wed Jan 22 13:41:48 2025 2026-03-11 00:16:21.070156 | orchestrator | OS/Arch: linux/amd64 2026-03-11 00:16:21.070164 | orchestrator | Experimental: false 2026-03-11 00:16:21.070172 | orchestrator | containerd: 2026-03-11 00:16:21.070180 | orchestrator | Version: v2.2.1 2026-03-11 00:16:21.070188 | orchestrator | GitCommit: dea7da592f5d1d2b7755e3a161be07f43fad8f75 2026-03-11 00:16:21.070197 | orchestrator | runc: 2026-03-11 00:16:21.070205 | orchestrator | Version: 1.3.4 2026-03-11 00:16:21.070213 | orchestrator | GitCommit: v1.3.4-0-gd6d73eb8 2026-03-11 00:16:21.070221 | orchestrator | docker-init: 2026-03-11 00:16:21.070228 | orchestrator | Version: 0.19.0 2026-03-11 00:16:21.070237 | orchestrator | GitCommit: de40ad0 2026-03-11 00:16:21.072697 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2026-03-11 00:16:21.082401 | orchestrator | + set -e 2026-03-11 00:16:21.082447 | orchestrator | + source /opt/manager-vars.sh 2026-03-11 00:16:21.082459 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-11 00:16:21.082471 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-11 00:16:21.082482 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-11 00:16:21.082493 | orchestrator | ++ CEPH_VERSION=reef 2026-03-11 00:16:21.082504 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-11 
00:16:21.082515 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-11 00:16:21.082568 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-11 00:16:21.082590 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-11 00:16:21.082602 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-11 00:16:21.082613 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-11 00:16:21.082637 | orchestrator | ++ export ARA=false 2026-03-11 00:16:21.082648 | orchestrator | ++ ARA=false 2026-03-11 00:16:21.082659 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-11 00:16:21.082670 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-11 00:16:21.082681 | orchestrator | ++ export TEMPEST=true 2026-03-11 00:16:21.082691 | orchestrator | ++ TEMPEST=true 2026-03-11 00:16:21.082702 | orchestrator | ++ export IS_ZUUL=true 2026-03-11 00:16:21.082713 | orchestrator | ++ IS_ZUUL=true 2026-03-11 00:16:21.082723 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.142 2026-03-11 00:16:21.082734 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.142 2026-03-11 00:16:21.082749 | orchestrator | ++ export EXTERNAL_API=false 2026-03-11 00:16:21.082760 | orchestrator | ++ EXTERNAL_API=false 2026-03-11 00:16:21.082771 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-11 00:16:21.082781 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-11 00:16:21.082792 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-11 00:16:21.082802 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-11 00:16:21.082814 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-11 00:16:21.082824 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-11 00:16:21.082835 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-11 00:16:21.082845 | orchestrator | ++ export INTERACTIVE=false 2026-03-11 00:16:21.082856 | orchestrator | ++ INTERACTIVE=false 2026-03-11 00:16:21.082867 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-11 00:16:21.082882 | orchestrator | ++ 
OSISM_APPLY_RETRY=1
2026-03-11 00:16:21.083187 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2026-03-11 00:16:21.083206 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-11 00:16:21.083217 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2026-03-11 00:16:21.088376 | orchestrator | + set -e
2026-03-11 00:16:21.088403 | orchestrator | + VERSION=reef
2026-03-11 00:16:21.089719 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2026-03-11 00:16:21.093495 | orchestrator | + [[ -n ceph_version: reef ]]
2026-03-11 00:16:21.093522 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2026-03-11 00:16:21.098908 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2026-03-11 00:16:21.105845 | orchestrator | + set -e
2026-03-11 00:16:21.106325 | orchestrator | + VERSION=2024.2
2026-03-11 00:16:21.107090 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2026-03-11 00:16:21.111558 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2026-03-11 00:16:21.111585 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2026-03-11 00:16:21.114199 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2026-03-11 00:16:21.115144 | orchestrator | ++ semver latest 7.0.0
2026-03-11 00:16:21.167970 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-11 00:16:21.168079 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2026-03-11 00:16:21.168098 | orchestrator | + echo 'enable_osism_kubernetes: true'
2026-03-11 00:16:21.168543 | orchestrator | ++ semver latest 10.0.0-0
2026-03-11 00:16:21.218370 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-11 00:16:21.219112 | orchestrator | ++ semver 2024.2 2025.1
2026-03-11 00:16:21.270104 | orchestrator | + [[ -1 -ge 0 ]]
2026-03-11 00:16:21.270192 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2026-03-11 00:16:21.351344 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-11 00:16:21.353329 | orchestrator | + source /opt/venv/bin/activate
2026-03-11 00:16:21.354521 | orchestrator | ++ deactivate nondestructive
2026-03-11 00:16:21.354558 | orchestrator | ++ '[' -n '' ']'
2026-03-11 00:16:21.354573 | orchestrator | ++ '[' -n '' ']'
2026-03-11 00:16:21.355628 | orchestrator | ++ hash -r
2026-03-11 00:16:21.355654 | orchestrator | ++ '[' -n '' ']'
2026-03-11 00:16:21.355667 | orchestrator | ++ unset VIRTUAL_ENV
2026-03-11 00:16:21.355678 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2026-03-11 00:16:21.355692 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2026-03-11 00:16:21.355704 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2026-03-11 00:16:21.355715 | orchestrator | ++ '[' linux-gnu = msys ']'
2026-03-11 00:16:21.355726 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2026-03-11 00:16:21.355737 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2026-03-11 00:16:21.355755 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-11 00:16:21.355775 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-11 00:16:21.355792 | orchestrator | ++ export PATH
2026-03-11 00:16:21.355810 | orchestrator | ++ '[' -n '' ']'
2026-03-11 00:16:21.355827 | orchestrator | ++ '[' -z '' ']'
2026-03-11 00:16:21.355846 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2026-03-11 00:16:21.355864 | orchestrator | ++ PS1='(venv) '
2026-03-11 00:16:21.355883 | orchestrator | ++ export PS1
2026-03-11 00:16:21.355899 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2026-03-11 00:16:21.355911 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2026-03-11 00:16:21.355922 | orchestrator | ++ hash -r
2026-03-11 00:16:21.355952 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2026-03-11 00:16:22.532486 | orchestrator |
2026-03-11 00:16:22.532589 | orchestrator | PLAY [Copy custom facts] *******************************************************
2026-03-11 00:16:22.532604 | orchestrator |
2026-03-11 00:16:22.532615 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-11 00:16:23.104663 | orchestrator | ok: [testbed-manager]
2026-03-11 00:16:23.104775 | orchestrator |
2026-03-11 00:16:23.104795 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-11 00:16:24.074188 | orchestrator | changed: [testbed-manager]
2026-03-11 00:16:24.074302 | orchestrator |
2026-03-11 00:16:24.074319 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2026-03-11 00:16:24.074331 | orchestrator |
2026-03-11 00:16:24.074341 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-11 00:16:26.132912 | orchestrator | ok: [testbed-manager]
2026-03-11 00:16:26.133016 | orchestrator |
2026-03-11 00:16:26.133061 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2026-03-11 00:16:26.183548 | orchestrator | ok: [testbed-manager]
2026-03-11 00:16:26.183649 | orchestrator |
2026-03-11 00:16:26.183668 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2026-03-11 00:16:26.586158 | orchestrator | changed: [testbed-manager]
2026-03-11 00:16:26.586269 | orchestrator |
2026-03-11 00:16:26.586297 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2026-03-11 00:16:26.633634 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:16:26.633755 | orchestrator |
2026-03-11 00:16:26.633773 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2026-03-11 00:16:26.945253 | orchestrator | changed: [testbed-manager]
2026-03-11 00:16:26.945346 | orchestrator |
2026-03-11 00:16:26.945363 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2026-03-11 00:16:27.243007 | orchestrator | ok: [testbed-manager]
2026-03-11 00:16:27.243159 | orchestrator |
2026-03-11 00:16:27.243182 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2026-03-11 00:16:27.341303 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:16:27.341429 | orchestrator |
2026-03-11 00:16:27.341456 | orchestrator | PLAY [Apply role traefik] ******************************************************
2026-03-11 00:16:27.341477 | orchestrator |
2026-03-11 00:16:27.341490 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-11 00:16:28.906461 | orchestrator | ok: [testbed-manager]
2026-03-11 00:16:28.906583 | orchestrator |
2026-03-11 00:16:28.906600 | orchestrator | TASK [Apply traefik role] ******************************************************
2026-03-11 00:16:28.992508 | orchestrator | included: osism.services.traefik for testbed-manager
2026-03-11 00:16:28.992621 | orchestrator |
2026-03-11 00:16:28.992638 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2026-03-11 00:16:29.041179 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2026-03-11 00:16:29.041316 | orchestrator |
2026-03-11 00:16:29.041337 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2026-03-11 00:16:29.984308 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2026-03-11 00:16:29.984542 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2026-03-11 00:16:29.984572 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2026-03-11 00:16:29.984593 | orchestrator |
2026-03-11 00:16:29.984616 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2026-03-11 00:16:31.540427 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2026-03-11 00:16:31.540550 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2026-03-11 00:16:31.540565 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2026-03-11 00:16:31.540576 | orchestrator |
2026-03-11 00:16:31.540586 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2026-03-11 00:16:32.118864 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-11 00:16:32.119019 | orchestrator | changed: [testbed-manager]
2026-03-11 00:16:32.119074 | orchestrator |
2026-03-11 00:16:32.119099 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2026-03-11 00:16:32.680317 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-11 00:16:32.680441 | orchestrator | changed: [testbed-manager]
2026-03-11 00:16:32.680460 | orchestrator |
2026-03-11 00:16:32.680472 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2026-03-11 00:16:32.740653 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:16:32.740752 | orchestrator |
2026-03-11 00:16:32.740767 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2026-03-11 00:16:33.064400 | orchestrator | ok: [testbed-manager]
2026-03-11 00:16:33.064512 | orchestrator |
2026-03-11 00:16:33.064529 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2026-03-11 00:16:33.136724 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2026-03-11 00:16:33.136817 | orchestrator |
2026-03-11 00:16:33.136831 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2026-03-11 00:16:34.116382 | orchestrator | changed: [testbed-manager]
2026-03-11 00:16:34.116462 | orchestrator |
2026-03-11 00:16:34.116469 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2026-03-11 00:16:34.787447 | orchestrator | changed: [testbed-manager]
2026-03-11 00:16:34.787586 | orchestrator |
2026-03-11 00:16:34.787610 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2026-03-11 00:16:48.554346 | orchestrator | changed: [testbed-manager]
2026-03-11 00:16:48.554568 | orchestrator |
2026-03-11 00:16:48.554615 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2026-03-11 00:16:48.596494 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:16:48.596584 | orchestrator |
2026-03-11 00:16:48.596598 | orchestrator | PLAY [Deploy manager service] **************************************************
2026-03-11 00:16:48.596610 | orchestrator |
2026-03-11 00:16:48.596622 | orchestrator | TASK [Gathering Facts] *********************************************************
2026-03-11 00:16:50.317244 | orchestrator | ok: [testbed-manager]
2026-03-11 00:16:50.317347 | orchestrator |
2026-03-11 00:16:50.317393 | orchestrator | TASK [Apply manager role] ******************************************************
2026-03-11 00:16:50.425989 | orchestrator | included: osism.services.manager for testbed-manager
2026-03-11 00:16:50.426191 | orchestrator |
2026-03-11 00:16:50.426207 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2026-03-11 00:16:50.484328 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2026-03-11 00:16:50.484423 | orchestrator |
2026-03-11 00:16:50.484438 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2026-03-11 00:16:53.716660 | orchestrator | ok: [testbed-manager]
2026-03-11 00:16:53.717674 | orchestrator |
2026-03-11 00:16:53.717707 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2026-03-11 00:16:53.768161 | orchestrator | ok: [testbed-manager]
2026-03-11 00:16:53.768274 | orchestrator |
2026-03-11 00:16:53.768301 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2026-03-11 00:16:53.883158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2026-03-11 00:16:53.883250 | orchestrator |
2026-03-11 00:16:53.883266 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2026-03-11 00:16:56.449785 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2026-03-11 00:16:56.449887 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2026-03-11 00:16:56.449901 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2026-03-11 00:16:56.449913 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2026-03-11 00:16:56.449925 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2026-03-11 00:16:56.449936 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2026-03-11 00:16:56.449948 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2026-03-11 00:16:56.449959 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2026-03-11 00:16:56.449971 | orchestrator |
2026-03-11 00:16:56.449983 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2026-03-11 00:16:57.018432 | orchestrator | changed: [testbed-manager]
2026-03-11 00:16:57.018527 | orchestrator |
2026-03-11 00:16:57.018540 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2026-03-11 00:16:57.586096 | orchestrator | changed: [testbed-manager]
2026-03-11 00:16:57.586203 | orchestrator |
2026-03-11 00:16:57.586222 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2026-03-11 00:16:57.653216 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2026-03-11 00:16:57.653305 | orchestrator |
2026-03-11 00:16:57.653320 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2026-03-11 00:16:58.747238 | orchestrator | changed: [testbed-manager] => (item=ara)
2026-03-11 00:16:58.748041 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2026-03-11 00:16:58.748072 | orchestrator |
2026-03-11 00:16:58.748086 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2026-03-11 00:16:59.315837 | orchestrator | changed: [testbed-manager]
2026-03-11 00:16:59.315944 | orchestrator |
2026-03-11 00:16:59.315962 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2026-03-11 00:16:59.364318 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:16:59.364427 | orchestrator |
2026-03-11 00:16:59.364452 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2026-03-11 00:16:59.428371 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2026-03-11 00:16:59.428470 | orchestrator |
2026-03-11 00:16:59.428486 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2026-03-11 00:16:59.988253 | orchestrator | changed: [testbed-manager]
2026-03-11 00:16:59.988346 | orchestrator |
2026-03-11 00:16:59.988359 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2026-03-11 00:17:00.043121 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2026-03-11 00:17:00.043248 | orchestrator |
2026-03-11 00:17:00.043264 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2026-03-11 00:17:01.358169 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-11 00:17:01.358276 | orchestrator | changed: [testbed-manager] => (item=None)
2026-03-11 00:17:01.358293 | orchestrator | changed: [testbed-manager]
2026-03-11 00:17:01.358307 | orchestrator |
2026-03-11 00:17:01.358319 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2026-03-11 00:17:01.976361 | orchestrator | changed: [testbed-manager]
2026-03-11 00:17:01.976465 | orchestrator |
2026-03-11 00:17:01.976482 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2026-03-11 00:17:02.037839 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:17:02.037937 | orchestrator |
2026-03-11 00:17:02.037976 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2026-03-11 00:17:02.132253 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2026-03-11 00:17:02.132345 | orchestrator |
2026-03-11 00:17:02.132361 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2026-03-11 00:17:02.655717 | orchestrator | changed: [testbed-manager]
2026-03-11 00:17:02.655818 | orchestrator |
2026-03-11 00:17:02.655867 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2026-03-11 00:17:03.036316 | orchestrator | changed: [testbed-manager]
2026-03-11 00:17:03.036397 | orchestrator |
2026-03-11 00:17:03.036407 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2026-03-11 00:17:04.252851 | orchestrator | changed: [testbed-manager] => (item=conductor)
2026-03-11 00:17:04.252951 | orchestrator | changed: [testbed-manager] => (item=openstack)
2026-03-11 00:17:04.252968 | orchestrator |
2026-03-11 00:17:04.253018 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2026-03-11 00:17:04.866312 | orchestrator | changed: [testbed-manager]
2026-03-11 00:17:04.866420 | orchestrator |
2026-03-11 00:17:04.866436 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2026-03-11 00:17:05.217689 | orchestrator | ok: [testbed-manager]
2026-03-11 00:17:05.217796 | orchestrator |
2026-03-11 00:17:05.217815 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2026-03-11 00:17:05.574329 | orchestrator | changed: [testbed-manager]
2026-03-11 00:17:05.574456 | orchestrator |
2026-03-11 00:17:05.574480 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2026-03-11 00:17:05.619954 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:17:05.620073 | orchestrator |
2026-03-11 00:17:05.620087 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2026-03-11 00:17:05.693528 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2026-03-11 00:17:05.693648 | orchestrator |
2026-03-11 00:17:05.693672 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2026-03-11 00:17:05.735291 | orchestrator | ok: [testbed-manager]
2026-03-11 00:17:05.735380 | orchestrator |
2026-03-11 00:17:05.735394 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2026-03-11 00:17:07.702704 | orchestrator | changed: [testbed-manager] => (item=osism)
2026-03-11 00:17:07.702808 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2026-03-11 00:17:07.702824 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2026-03-11 00:17:07.702836 | orchestrator |
2026-03-11 00:17:07.702848 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2026-03-11 00:17:08.393406 | orchestrator | changed: [testbed-manager]
2026-03-11 00:17:08.393510 | orchestrator |
2026-03-11 00:17:08.393528 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2026-03-11 00:17:09.078365 | orchestrator | changed: [testbed-manager]
2026-03-11 00:17:09.078464 | orchestrator |
2026-03-11 00:17:09.078478 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2026-03-11 00:17:09.748876 | orchestrator | changed: [testbed-manager]
2026-03-11 00:17:09.749050 | orchestrator |
2026-03-11 00:17:09.749072 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2026-03-11 00:17:09.818189 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2026-03-11 00:17:09.818294 | orchestrator |
2026-03-11 00:17:09.818320 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2026-03-11 00:17:09.855851 | orchestrator | ok: [testbed-manager]
2026-03-11 00:17:09.855934 | orchestrator |
2026-03-11 00:17:09.855947 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2026-03-11 00:17:10.541487 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2026-03-11 00:17:10.541589 | orchestrator |
2026-03-11 00:17:10.541604 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2026-03-11 00:17:10.623733 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2026-03-11 00:17:10.623825 | orchestrator |
2026-03-11 00:17:10.623839 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2026-03-11 00:17:11.299460 | orchestrator | changed: [testbed-manager]
2026-03-11 00:17:11.299564 | orchestrator |
2026-03-11 00:17:11.299580 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2026-03-11 00:17:11.897309 | orchestrator | ok: [testbed-manager]
2026-03-11 00:17:11.897423 | orchestrator |
2026-03-11 00:17:11.897450 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2026-03-11 00:17:11.954184 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:17:11.954263 | orchestrator |
2026-03-11 00:17:11.954275 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2026-03-11 00:17:12.011696 | orchestrator | ok: [testbed-manager]
2026-03-11 00:17:12.011789 | orchestrator |
2026-03-11 00:17:12.011804 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2026-03-11 00:17:12.820849 | orchestrator | changed: [testbed-manager]
2026-03-11 00:17:12.820951 | orchestrator |
2026-03-11 00:17:12.821050 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2026-03-11 00:18:15.012347 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:15.012459 | orchestrator |
2026-03-11 00:18:15.012476 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2026-03-11 00:18:15.953561 | orchestrator | ok: [testbed-manager]
2026-03-11 00:18:15.953671 | orchestrator |
2026-03-11 00:18:15.953688 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2026-03-11 00:18:16.010332 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:18:16.010436 | orchestrator |
2026-03-11 00:18:16.010450 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2026-03-11 00:18:22.460885 | orchestrator | changed: [testbed-manager]
2026-03-11 00:18:22.460998 | orchestrator |
2026-03-11 00:18:22.461016 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2026-03-11 00:18:22.578440 | orchestrator | ok: [testbed-manager]
2026-03-11 00:18:22.578536 | orchestrator |
2026-03-11 00:18:22.578574 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-11 00:18:22.578587 | orchestrator |
2026-03-11 00:18:22.578599 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2026-03-11 00:18:22.623669 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:18:22.623777 | orchestrator |
2026-03-11 00:18:22.623793 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2026-03-11 00:19:22.679051 | orchestrator | Pausing for 60 seconds
2026-03-11 00:19:22.679806 | orchestrator | changed: [testbed-manager]
2026-03-11 00:19:22.679840 | orchestrator |
2026-03-11 00:19:22.679854 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2026-03-11 00:19:25.607800 | orchestrator | changed: [testbed-manager]
2026-03-11 00:19:25.607907 | orchestrator |
2026-03-11 00:19:25.607925 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2026-03-11 00:20:06.965233 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2026-03-11 00:20:06.965341 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2026-03-11 00:20:06.965356 | orchestrator | changed: [testbed-manager]
2026-03-11 00:20:06.965395 | orchestrator |
2026-03-11 00:20:06.965408 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2026-03-11 00:20:16.105635 | orchestrator | changed: [testbed-manager]
2026-03-11 00:20:16.105749 | orchestrator |
2026-03-11 00:20:16.105767 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2026-03-11 00:20:16.203994 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2026-03-11 00:20:16.204098 | orchestrator |
2026-03-11 00:20:16.204114 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2026-03-11 00:20:16.204128 | orchestrator |
2026-03-11 00:20:16.204140 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2026-03-11 00:20:16.250753 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:20:16.250846 | orchestrator |
2026-03-11 00:20:16.250860 | orchestrator | TASK [osism.services.manager : Include version verification tasks] *************
2026-03-11 00:20:16.322743 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/verify-versions.yml for testbed-manager
2026-03-11 00:20:16.322850 | orchestrator |
2026-03-11 00:20:16.322873 | orchestrator | TASK [osism.services.manager : Deploy service manager version check script] ****
2026-03-11 00:20:16.990801 | orchestrator | changed: [testbed-manager]
2026-03-11 00:20:16.990905 | orchestrator |
2026-03-11 00:20:16.990923 | orchestrator | TASK [osism.services.manager : Execute service manager version check] **********
2026-03-11 00:20:20.015564 | orchestrator | ok: [testbed-manager]
2026-03-11 00:20:20.015679 | orchestrator |
2026-03-11 00:20:20.015696 | orchestrator | TASK [osism.services.manager : Display version check results] ******************
2026-03-11 00:20:20.089079 | orchestrator | ok: [testbed-manager] => {
2026-03-11 00:20:20.089175 | orchestrator |     "version_check_result.stdout_lines": [
2026-03-11 00:20:20.089191 | orchestrator |         "=== OSISM Container Version Check ===",
2026-03-11 00:20:20.089203 | orchestrator |         "Checking running containers against expected versions...",
2026-03-11 00:20:20.089216 | orchestrator |         "",
2026-03-11 00:20:20.089232 | orchestrator |         "Checking service: inventory_reconciler (Inventory Reconciler Service)",
2026-03-11 00:20:20.089244 | orchestrator |         "  Expected: registry.osism.tech/osism/inventory-reconciler:latest",
2026-03-11 00:20:20.089257 | orchestrator |         "  Enabled: true",
2026-03-11 00:20:20.089269 | orchestrator |         "  Running: registry.osism.tech/osism/inventory-reconciler:latest",
2026-03-11 00:20:20.089281 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:20:20.089293 | orchestrator |         "",
2026-03-11 00:20:20.089305 | orchestrator |         "Checking service: osism-ansible (OSISM Ansible Service)",
2026-03-11 00:20:20.089318 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-ansible:latest",
2026-03-11 00:20:20.089329 | orchestrator |         "  Enabled: true",
2026-03-11 00:20:20.089341 | orchestrator |         "  Running: registry.osism.tech/osism/osism-ansible:latest",
2026-03-11 00:20:20.089353 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:20:20.089365 | orchestrator |         "",
2026-03-11 00:20:20.089377 | orchestrator |         "Checking service: osism-kubernetes (Osism-Kubernetes Service)",
2026-03-11 00:20:20.089389 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-kubernetes:latest",
2026-03-11 00:20:20.089401 | orchestrator |         "  Enabled: true",
2026-03-11 00:20:20.089413 | orchestrator |         "  Running: registry.osism.tech/osism/osism-kubernetes:latest",
2026-03-11 00:20:20.089425 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:20:20.089436 | orchestrator |         "",
2026-03-11 00:20:20.089448 | orchestrator |         "Checking service: ceph-ansible (Ceph-Ansible Service)",
2026-03-11 00:20:20.089460 | orchestrator |         "  Expected: registry.osism.tech/osism/ceph-ansible:reef",
2026-03-11 00:20:20.089566 | orchestrator |         "  Enabled: true",
2026-03-11 00:20:20.089582 | orchestrator |         "  Running: registry.osism.tech/osism/ceph-ansible:reef",
2026-03-11 00:20:20.089593 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:20:20.089605 | orchestrator |         "",
2026-03-11 00:20:20.089616 | orchestrator |         "Checking service: kolla-ansible (Kolla-Ansible Service)",
2026-03-11 00:20:20.089627 | orchestrator |         "  Expected: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-03-11 00:20:20.089661 | orchestrator |         "  Enabled: true",
2026-03-11 00:20:20.089672 | orchestrator |         "  Running: registry.osism.tech/osism/kolla-ansible:2024.2",
2026-03-11 00:20:20.089683 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:20:20.089694 | orchestrator |         "",
2026-03-11 00:20:20.089705 | orchestrator |         "Checking service: osismclient (OSISM Client)",
2026-03-11 00:20:20.089716 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-03-11 00:20:20.089727 | orchestrator |         "  Enabled: true",
2026-03-11 00:20:20.089738 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-03-11 00:20:20.089749 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:20:20.089760 | orchestrator |         "",
2026-03-11 00:20:20.089771 | orchestrator |         "Checking service: ara-server (ARA Server)",
2026-03-11 00:20:20.089782 | orchestrator |         "  Expected: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-11 00:20:20.089793 | orchestrator |         "  Enabled: true",
2026-03-11 00:20:20.089804 | orchestrator |         "  Running: registry.osism.tech/osism/ara-server:1.7.3",
2026-03-11 00:20:20.089814 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:20:20.089825 | orchestrator |         "",
2026-03-11 00:20:20.089836 | orchestrator |         "Checking service: mariadb (MariaDB for ARA)",
2026-03-11 00:20:20.089847 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-11 00:20:20.089858 | orchestrator |         "  Enabled: true",
2026-03-11 00:20:20.089869 | orchestrator |         "  Running: registry.osism.tech/dockerhub/library/mariadb:11.8.4",
2026-03-11 00:20:20.089880 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:20:20.089890 | orchestrator |         "",
2026-03-11 00:20:20.089909 | orchestrator |         "Checking service: frontend (OSISM Frontend)",
2026-03-11 00:20:20.089920 | orchestrator |         "  Expected: registry.osism.tech/osism/osism-frontend:latest",
2026-03-11 00:20:20.089936 | orchestrator |         "  Enabled: true",
2026-03-11 00:20:20.089947 | orchestrator |         "  Running: registry.osism.tech/osism/osism-frontend:latest",
2026-03-11 00:20:20.089959 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:20:20.089970 | orchestrator |         "",
2026-03-11 00:20:20.089981 | orchestrator |         "Checking service: redis (Redis Cache)",
2026-03-11 00:20:20.089992 | orchestrator |         "  Expected: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-11 00:20:20.090003 | orchestrator |         "  Enabled: true",
2026-03-11 00:20:20.090014 | orchestrator |         "  Running: registry.osism.tech/dockerhub/library/redis:7.4.7-alpine",
2026-03-11 00:20:20.090085 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:20:20.090097 | orchestrator |         "",
2026-03-11 00:20:20.090107 | orchestrator |         "Checking service: api (OSISM API Service)",
2026-03-11 00:20:20.090118 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-03-11 00:20:20.090129 | orchestrator |         "  Enabled: true",
2026-03-11 00:20:20.090140 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-03-11 00:20:20.090151 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:20:20.090162 | orchestrator |         "",
2026-03-11 00:20:20.090172 | orchestrator |         "Checking service: listener (OpenStack Event Listener)",
2026-03-11 00:20:20.090183 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-03-11 00:20:20.090194 | orchestrator |         "  Enabled: true",
2026-03-11 00:20:20.090205 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-03-11 00:20:20.090216 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:20:20.090226 | orchestrator |         "",
2026-03-11 00:20:20.090237 | orchestrator |         "Checking service: openstack (OpenStack Integration)",
2026-03-11 00:20:20.090248 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-03-11 00:20:20.090258 | orchestrator |         "  Enabled: true",
2026-03-11 00:20:20.090269 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-03-11 00:20:20.090280 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:20:20.090291 | orchestrator |         "",
2026-03-11 00:20:20.090301 | orchestrator |         "Checking service: beat (Celery Beat Scheduler)",
2026-03-11 00:20:20.090312 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-03-11 00:20:20.090323 | orchestrator |         "  Enabled: true",
2026-03-11 00:20:20.090333 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-03-11 00:20:20.090355 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:20:20.090366 | orchestrator |         "",
2026-03-11 00:20:20.090377 | orchestrator |         "Checking service: flower (Celery Flower Monitor)",
2026-03-11 00:20:20.090406 | orchestrator |         "  Expected: registry.osism.tech/osism/osism:latest",
2026-03-11 00:20:20.090417 | orchestrator |         "  Enabled: true",
2026-03-11 00:20:20.090428 | orchestrator |         "  Running: registry.osism.tech/osism/osism:latest",
2026-03-11 00:20:20.090438 | orchestrator |         "  Status: ✅ MATCH",
2026-03-11 00:20:20.090449 | orchestrator |         "",
2026-03-11 00:20:20.090460 | orchestrator |         "=== Summary ===",
2026-03-11 00:20:20.090471 | orchestrator |         "Errors (version mismatches): 0",
2026-03-11 00:20:20.090511 | orchestrator |         "Warnings (expected containers not running): 0",
2026-03-11 00:20:20.090530 | orchestrator |         "",
2026-03-11 00:20:20.090547 | orchestrator |         "✅ All running containers match expected versions!"
2026-03-11 00:20:20.090565 | orchestrator |     ]
2026-03-11 00:20:20.090584 | orchestrator | }
2026-03-11 00:20:20.090602 | orchestrator |
2026-03-11 00:20:20.090616 | orchestrator | TASK [osism.services.manager : Skip version check due to service configuration] ***
2026-03-11 00:20:20.143615 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:20:20.143724 | orchestrator |
2026-03-11 00:20:20.143749 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:20:20.143770 | orchestrator | testbed-manager : ok=70 changed=37 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2026-03-11 00:20:20.143788 | orchestrator |
2026-03-11 00:20:20.219580 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2026-03-11 00:20:20.219669 | orchestrator | + deactivate
2026-03-11 00:20:20.219685 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2026-03-11 00:20:20.219701 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2026-03-11 00:20:20.219712 | orchestrator | + export PATH
2026-03-11 00:20:20.219723 | orchestrator | + unset _OLD_VIRTUAL_PATH
2026-03-11 00:20:20.219734 | orchestrator | + '[' -n '' ']'
2026-03-11 00:20:20.219745 | orchestrator | + hash -r
2026-03-11 00:20:20.219755 | orchestrator | + '[' -n '' ']'
2026-03-11 00:20:20.219765 | orchestrator | + unset VIRTUAL_ENV
2026-03-11 00:20:20.219775 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2026-03-11 00:20:20.219786 | orchestrator | + '[' '!' '' = nondestructive ']'
2026-03-11 00:20:20.219796 | orchestrator | + unset -f deactivate
2026-03-11 00:20:20.219807 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2026-03-11 00:20:20.227223 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2026-03-11 00:20:20.227269 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2026-03-11 00:20:20.227282 | orchestrator | + local max_attempts=60
2026-03-11 00:20:20.227293 | orchestrator | + local name=ceph-ansible
2026-03-11 00:20:20.227304 | orchestrator | + local attempt_num=1
2026-03-11 00:20:20.227984 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2026-03-11 00:20:20.264971 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-11 00:20:20.265050 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2026-03-11 00:20:20.265064 | orchestrator | + local max_attempts=60
2026-03-11 00:20:20.265076 | orchestrator | + local name=kolla-ansible
2026-03-11 00:20:20.265636 | orchestrator | + local attempt_num=1
2026-03-11 00:20:20.265939 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2026-03-11 00:20:20.297314 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-11 00:20:20.297403 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2026-03-11 00:20:20.297429 | orchestrator | + local max_attempts=60
2026-03-11 00:20:20.297450 | orchestrator | + local name=osism-ansible
2026-03-11 00:20:20.297470 | orchestrator | + local attempt_num=1
2026-03-11 00:20:20.298255 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2026-03-11 00:20:20.334325 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2026-03-11 00:20:20.334407 | orchestrator | + [[ true == \t\r\u\e ]]
2026-03-11 00:20:20.334420 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2026-03-11 00:20:20.932403 | orchestrator | + docker compose
--project-directory /opt/manager ps 2026-03-11 00:20:21.099797 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2026-03-11 00:20:21.099952 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2026-03-11 00:20:21.099982 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2026-03-11 00:20:21.100002 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2026-03-11 00:20:21.100026 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2026-03-11 00:20:21.100039 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy) 2026-03-11 00:20:21.100050 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy) 2026-03-11 00:20:21.100061 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 55 seconds (healthy) 2026-03-11 00:20:21.100089 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy) 2026-03-11 00:20:21.100101 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.4 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2026-03-11 00:20:21.100111 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute 
ago Up About a minute (healthy) 2026-03-11 00:20:21.100122 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.7-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2026-03-11 00:20:21.100133 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2026-03-11 00:20:21.100144 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp 2026-03-11 00:20:21.100155 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2026-03-11 00:20:21.100165 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy) 2026-03-11 00:20:21.104117 | orchestrator | ++ semver latest 7.0.0 2026-03-11 00:20:21.151959 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-11 00:20:21.152052 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-11 00:20:21.152065 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2026-03-11 00:20:21.154187 | orchestrator | + osism apply resolvconf -l testbed-manager 2026-03-11 00:20:33.087937 | orchestrator | 2026-03-11 00:20:33 | INFO  | Prepare task for execution of resolvconf. 2026-03-11 00:20:33.310170 | orchestrator | 2026-03-11 00:20:33 | INFO  | Task 22f426fe-71d8-43a2-bad7-c887235a08af (resolvconf) was prepared for execution. 2026-03-11 00:20:33.310269 | orchestrator | 2026-03-11 00:20:33 | INFO  | It takes a moment until task 22f426fe-71d8-43a2-bad7-c887235a08af (resolvconf) has been started and output is visible here. 
2026-03-11 00:20:46.620747 | orchestrator | 2026-03-11 00:20:46.620904 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2026-03-11 00:20:46.620925 | orchestrator | 2026-03-11 00:20:46.620938 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-11 00:20:46.620950 | orchestrator | Wednesday 11 March 2026 00:20:37 +0000 (0:00:00.124) 0:00:00.124 ******* 2026-03-11 00:20:46.620961 | orchestrator | ok: [testbed-manager] 2026-03-11 00:20:46.620973 | orchestrator | 2026-03-11 00:20:46.620984 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2026-03-11 00:20:46.620996 | orchestrator | Wednesday 11 March 2026 00:20:40 +0000 (0:00:03.532) 0:00:03.656 ******* 2026-03-11 00:20:46.621007 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:20:46.621019 | orchestrator | 2026-03-11 00:20:46.621030 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-11 00:20:46.621041 | orchestrator | Wednesday 11 March 2026 00:20:40 +0000 (0:00:00.067) 0:00:03.724 ******* 2026-03-11 00:20:46.621052 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2026-03-11 00:20:46.621065 | orchestrator | 2026-03-11 00:20:46.621076 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-11 00:20:46.621087 | orchestrator | Wednesday 11 March 2026 00:20:40 +0000 (0:00:00.080) 0:00:03.804 ******* 2026-03-11 00:20:46.621110 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2026-03-11 00:20:46.621122 | orchestrator | 2026-03-11 00:20:46.621133 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2026-03-11 00:20:46.621144 | orchestrator | Wednesday 11 March 2026 00:20:41 +0000 (0:00:00.090) 0:00:03.894 ******* 2026-03-11 00:20:46.621155 | orchestrator | ok: [testbed-manager] 2026-03-11 00:20:46.621166 | orchestrator | 2026-03-11 00:20:46.621177 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-11 00:20:46.621203 | orchestrator | Wednesday 11 March 2026 00:20:42 +0000 (0:00:01.097) 0:00:04.992 ******* 2026-03-11 00:20:46.621217 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:20:46.621230 | orchestrator | 2026-03-11 00:20:46.621242 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-11 00:20:46.621255 | orchestrator | Wednesday 11 March 2026 00:20:42 +0000 (0:00:00.055) 0:00:05.048 ******* 2026-03-11 00:20:46.621267 | orchestrator | ok: [testbed-manager] 2026-03-11 00:20:46.621280 | orchestrator | 2026-03-11 00:20:46.621292 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-11 00:20:46.621305 | orchestrator | Wednesday 11 March 2026 00:20:42 +0000 (0:00:00.504) 0:00:05.552 ******* 2026-03-11 00:20:46.621318 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:20:46.621330 | orchestrator | 2026-03-11 00:20:46.621343 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-11 00:20:46.621369 | orchestrator | Wednesday 11 March 2026 00:20:42 +0000 (0:00:00.078) 0:00:05.630 ******* 2026-03-11 00:20:46.621380 | orchestrator | changed: [testbed-manager] 2026-03-11 00:20:46.621391 | orchestrator | 2026-03-11 00:20:46.621402 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-11 00:20:46.621413 | orchestrator | Wednesday 11 March 2026 00:20:43 +0000 (0:00:00.579) 0:00:06.210 ******* 2026-03-11 00:20:46.621443 | orchestrator | changed: 
[testbed-manager] 2026-03-11 00:20:46.621454 | orchestrator | 2026-03-11 00:20:46.621465 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-11 00:20:46.621476 | orchestrator | Wednesday 11 March 2026 00:20:44 +0000 (0:00:01.014) 0:00:07.224 ******* 2026-03-11 00:20:46.621487 | orchestrator | ok: [testbed-manager] 2026-03-11 00:20:46.621498 | orchestrator | 2026-03-11 00:20:46.621530 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-11 00:20:46.621542 | orchestrator | Wednesday 11 March 2026 00:20:45 +0000 (0:00:00.921) 0:00:08.146 ******* 2026-03-11 00:20:46.621553 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2026-03-11 00:20:46.621564 | orchestrator | 2026-03-11 00:20:46.621575 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-11 00:20:46.621586 | orchestrator | Wednesday 11 March 2026 00:20:45 +0000 (0:00:00.086) 0:00:08.232 ******* 2026-03-11 00:20:46.621597 | orchestrator | changed: [testbed-manager] 2026-03-11 00:20:46.621607 | orchestrator | 2026-03-11 00:20:46.621618 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:20:46.621631 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-11 00:20:46.621642 | orchestrator | 2026-03-11 00:20:46.621653 | orchestrator | 2026-03-11 00:20:46.621664 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:20:46.621674 | orchestrator | Wednesday 11 March 2026 00:20:46 +0000 (0:00:01.080) 0:00:09.312 ******* 2026-03-11 00:20:46.621685 | orchestrator | =============================================================================== 2026-03-11 00:20:46.621696 | 
orchestrator | Gathering Facts --------------------------------------------------------- 3.53s 2026-03-11 00:20:46.621707 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.10s 2026-03-11 00:20:46.621718 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.08s 2026-03-11 00:20:46.621728 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.01s 2026-03-11 00:20:46.621739 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.92s 2026-03-11 00:20:46.621750 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.58s 2026-03-11 00:20:46.621780 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.50s 2026-03-11 00:20:46.621792 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2026-03-11 00:20:46.621803 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.09s 2026-03-11 00:20:46.621814 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2026-03-11 00:20:46.621824 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2026-03-11 00:20:46.621835 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2026-03-11 00:20:46.621846 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2026-03-11 00:20:46.796727 | orchestrator | + osism apply sshconfig 2026-03-11 00:20:58.568480 | orchestrator | 2026-03-11 00:20:58 | INFO  | Prepare task for execution of sshconfig. 2026-03-11 00:20:58.626874 | orchestrator | 2026-03-11 00:20:58 | INFO  | Task 568e2393-1093-4de2-aff1-4ea848787f35 (sshconfig) was prepared for execution. 
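The resolvconf play above archives any pre-existing `/etc/resolv.conf` regular file and replaces it with a symlink to systemd-resolved's stub resolver file. A sketch of those two tasks, parameterised by a root directory for illustration only (the role itself operates on `/` and the `.bak` suffix is an assumption here):

```shell
# Sketch of the role's "Archive existing file" and "Link
# /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf" tasks,
# made relocatable via a root argument so it can run in a scratch tree.
link_stub_resolv() {
    local root=$1
    mkdir -p "$root/run/systemd/resolve" "$root/etc"
    # Archive a pre-existing regular file (skipped when already a symlink,
    # matching the skipped "Archive existing file" task in the play above).
    if [ -f "$root/etc/resolv.conf" ] && [ ! -L "$root/etc/resolv.conf" ]; then
        mv "$root/etc/resolv.conf" "$root/etc/resolv.conf.bak"
    fi
    # Point resolv.conf at systemd-resolved's stub resolver file.
    ln -snf "$root/run/systemd/resolve/stub-resolv.conf" "$root/etc/resolv.conf"
}
```

After this step `systemd-resolved` owns name resolution, which is why the play finishes by starting, enabling, and restarting the `systemd-resolved` service.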
2026-03-11 00:20:58.626974 | orchestrator | 2026-03-11 00:20:58 | INFO  | It takes a moment until task 568e2393-1093-4de2-aff1-4ea848787f35 (sshconfig) has been started and output is visible here. 2026-03-11 00:21:09.741249 | orchestrator | 2026-03-11 00:21:09.741419 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2026-03-11 00:21:09.741440 | orchestrator | 2026-03-11 00:21:09.741453 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2026-03-11 00:21:09.741464 | orchestrator | Wednesday 11 March 2026 00:21:02 +0000 (0:00:00.121) 0:00:00.121 ******* 2026-03-11 00:21:09.741475 | orchestrator | ok: [testbed-manager] 2026-03-11 00:21:09.741488 | orchestrator | 2026-03-11 00:21:09.741503 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2026-03-11 00:21:09.741520 | orchestrator | Wednesday 11 March 2026 00:21:03 +0000 (0:00:01.502) 0:00:01.624 ******* 2026-03-11 00:21:09.741576 | orchestrator | changed: [testbed-manager] 2026-03-11 00:21:09.741590 | orchestrator | 2026-03-11 00:21:09.741601 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2026-03-11 00:21:09.741612 | orchestrator | Wednesday 11 March 2026 00:21:04 +0000 (0:00:00.466) 0:00:02.091 ******* 2026-03-11 00:21:09.741623 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2026-03-11 00:21:09.741634 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2026-03-11 00:21:09.741645 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2026-03-11 00:21:09.741656 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2026-03-11 00:21:09.741666 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2026-03-11 00:21:09.741677 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2026-03-11 00:21:09.741687 | orchestrator | changed: 
[testbed-manager] => (item=testbed-manager) 2026-03-11 00:21:09.741698 | orchestrator | 2026-03-11 00:21:09.741709 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2026-03-11 00:21:09.741719 | orchestrator | Wednesday 11 March 2026 00:21:09 +0000 (0:00:04.917) 0:00:07.008 ******* 2026-03-11 00:21:09.741730 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:21:09.741741 | orchestrator | 2026-03-11 00:21:09.741752 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2026-03-11 00:21:09.741763 | orchestrator | Wednesday 11 March 2026 00:21:09 +0000 (0:00:00.061) 0:00:07.070 ******* 2026-03-11 00:21:09.741773 | orchestrator | changed: [testbed-manager] 2026-03-11 00:21:09.741784 | orchestrator | 2026-03-11 00:21:09.741795 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:21:09.741807 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:21:09.741819 | orchestrator | 2026-03-11 00:21:09.741830 | orchestrator | 2026-03-11 00:21:09.741841 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:21:09.741852 | orchestrator | Wednesday 11 March 2026 00:21:09 +0000 (0:00:00.497) 0:00:07.567 ******* 2026-03-11 00:21:09.741863 | orchestrator | =============================================================================== 2026-03-11 00:21:09.741874 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 4.92s 2026-03-11 00:21:09.741884 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 1.50s 2026-03-11 00:21:09.741895 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.50s 2026-03-11 00:21:09.741906 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist 
-------------------- 0.47s 2026-03-11 00:21:09.741916 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.06s 2026-03-11 00:21:09.934586 | orchestrator | + osism apply known-hosts 2026-03-11 00:21:21.647226 | orchestrator | 2026-03-11 00:21:21 | INFO  | Prepare task for execution of known-hosts. 2026-03-11 00:21:21.714296 | orchestrator | 2026-03-11 00:21:21 | INFO  | Task 76865220-f063-4ef4-9ae8-6bd60c7e9796 (known-hosts) was prepared for execution. 2026-03-11 00:21:21.714425 | orchestrator | 2026-03-11 00:21:21 | INFO  | It takes a moment until task 76865220-f063-4ef4-9ae8-6bd60c7e9796 (known-hosts) has been started and output is visible here. 2026-03-11 00:21:37.514719 | orchestrator | 2026-03-11 00:21:37.514824 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2026-03-11 00:21:37.514840 | orchestrator | 2026-03-11 00:21:37.514852 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2026-03-11 00:21:37.514864 | orchestrator | Wednesday 11 March 2026 00:21:25 +0000 (0:00:00.162) 0:00:00.162 ******* 2026-03-11 00:21:37.514876 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-11 00:21:37.514887 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-11 00:21:37.514898 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-11 00:21:37.514928 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-11 00:21:37.514939 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-11 00:21:37.514950 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-11 00:21:37.514961 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-11 00:21:37.514971 | orchestrator | 2026-03-11 00:21:37.514982 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2026-03-11 
00:21:37.514993 | orchestrator | Wednesday 11 March 2026 00:21:31 +0000 (0:00:05.939) 0:00:06.101 ******* 2026-03-11 00:21:37.515013 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-11 00:21:37.515028 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-11 00:21:37.515039 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-11 00:21:37.515049 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-11 00:21:37.515060 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-11 00:21:37.515071 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-11 00:21:37.515081 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-11 00:21:37.515092 | orchestrator | 2026-03-11 00:21:37.515102 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:21:37.515113 | orchestrator | Wednesday 11 March 2026 00:21:32 +0000 (0:00:00.175) 0:00:06.277 ******* 2026-03-11 00:21:37.515128 | 
orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDpZtCtpY6d/WSXi2+PvvGhlZPID3eURYK4niJjcvscObjH80BLK0MFgSZceWTySDHyOzClAR9TLziLRYxU3LUrAxUABJcOegGZbMgMKsEY+wbnjKjRACrJKcVpnbsA7crFM1HWOM++rVp77d0Ox4ywoDFlMZS5IyUn51rQJL1GAXhOqfiGmNCMNVlWyRyt0VpxdpPZqbJf9oVKN47DvlxCFI+XdIdoBUocSM1PG1qr3XEuFEGjrTW8REReFq0595mnZ8RiTQX1SdgxGLILE6iuPv0gEMFiHLO2MwT2ZEyKYsT1d3QbJkR/l1URJ18AFsuYAY43+mtMmZ7uP8+rQyDk40Ba5ZPvlNhdWTD19bEJJyRaPRPjUeP4JlndQwd9lZ1gKOee7SyChGkdBWvcPw5L9ejpTEoEaFH3sktPx2hL3dCQNvuQfuLvIqNX5H2t9MC+GI3YvsHIOaAClmtL8LA+D/rODMcE0KjTsH2ONanDMmk1JzsvaXv3xrk8E7XCV2c=) 2026-03-11 00:21:37.515142 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMcGhvFsiYojaVH7j3cFCPOlvpU+s8CQ6aKyUvyImU3iYLQQtBnom9o1vrltG1ErKPjNpDVfwsoTDClhSrDR8us=) 2026-03-11 00:21:37.515155 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB1XkErJCxIQiCHD+e/yuKipeyJD8eMgUGEManCH5M2E) 2026-03-11 00:21:37.515168 | orchestrator | 2026-03-11 00:21:37.515179 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:21:37.515190 | orchestrator | Wednesday 11 March 2026 00:21:33 +0000 (0:00:01.130) 0:00:07.407 ******* 2026-03-11 00:21:37.515222 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBy0ZfBO2xtD0xnUmmSxK0c1REbUU9MrNkbhfsqu3cXwQuofB3daDkjlvZKF1Ttp6mfuZeUk/8DCzaYhLEbjMsFpPHTLjBKijtlVqKP8l/NUk7So90DiDtZqLaGpEX7dGf0fPDd9UYxgpb7LuEhIwmMnFuI8yFyX/R74GWvnSbkRdoiIvS1tRVGB9aCxtfKSryGLSAK4MEBqckuZcQGWWGYO4DYglShQ7u57lz6T3w2SJaRiPBRQMGi8hB4dpA+0XynmHSEId6LdU7mm4L3ObceXDABbh5CZBDZdfv4HtKHZYdWnMhwKEReAMMyrCoDEO1SbRXEI220kTgmogymOOT3SqhTEU5S3maaFzCubFjNBlHDnLM4EpSURrhTRLz8RcjHuf3Li/TuJYK/VDkEXWLlx5ev9mV8tW026nLRSKGbPHTR1N1cOFCuYNyTJglG+a34OPSQudF8m7pCzm5sdJ3E2VUmusgVSjAPP4mvBgnf901+Kzxb1Xi8QcMesZel60=) 2026-03-11 
00:21:37.515242 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKk4oEm3wEmTHz3jIpGW7BYgckh3zpKYF4fva1Q5U0lMLvNDzgSntPl0I8Vg2/W+vHbDKaznZ1Fy/eiJblIUfww=) 2026-03-11 00:21:37.515253 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBvinnGP4ZhfEcifMWq7fKrpBsQsxYcXr2WeV32NVZZV) 2026-03-11 00:21:37.515264 | orchestrator | 2026-03-11 00:21:37.515274 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:21:37.515285 | orchestrator | Wednesday 11 March 2026 00:21:34 +0000 (0:00:00.994) 0:00:08.402 ******* 2026-03-11 00:21:37.515296 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIu+8PKXZem7u3nH2UB2MQ3N8eSPN+uJRdNCgASabjlx) 2026-03-11 00:21:37.515307 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCjZ3Xq1TfHNatOoq+i7RZhxTqO1zlID6+TfBaAeHVISjit8AsVMkKrqzZS84VvkLTk71MhlxO/VqsbTB8e2iT44z1FusHlR0y8pniyVx/0ZAvkg8KkSc1+NCc6fMUeTdCbcyeAIk7yTtHlf6PyboP7IZrTWEJYQI5q7LwbkfDkjx8la6AzUHn/zWgIaKhwRxuO1tT2uzBE+hfarMQ1Lksv4ARNewGE7zkx+y/0vHStcZH/gL3xl91WVbEhGKAk7Vs5xXnL/nDcJNO10b2N+JIgoZa8ia+5mwbzMqcNIb+ww4LKdJqwU6wfYtwniRZEq8k+c/hCpIjdvYxKW75c75ub8D94m31b7fXtF01vogg923aiaI3SkOsXKfA4ldBYj6V+vlgsiGCkv2pRmCyFuiAF+vYaW85we1lBLA0o2Xr567GBazxBUW7BnqfcnsMqYlH+PHmbvhbph3kz131m0p26OY9F2FYmAYFrgZhsJNDiZ1dwMt4R6QnVkkm/RwE8Jec=) 2026-03-11 00:21:37.515414 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJC61iATa9vAbVNr1L18On8cr4uS8LQ/J6XiCZ8Ivet+HT7HuV3HQqTAqX2sUmRNHcSQISsWav+pe5RPMMG4jOQ=) 2026-03-11 00:21:37.515427 | orchestrator | 2026-03-11 00:21:37.515438 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:21:37.515449 | 
orchestrator | Wednesday 11 March 2026 00:21:35 +0000 (0:00:00.991) 0:00:09.393 ******* 2026-03-11 00:21:37.515460 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNOAFU7rH2nMGLRJxGjosVwXicsciQ/GQHID4EVOj0Phv0pD1TeuTyVr6Oh+9V4XhX/ppdRe55f+rlIfEKt17No=) 2026-03-11 00:21:37.515471 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCUQLEAxhd12v97jRg0KH45KL64LIYUpwhRSYBNLWGvvkkgu3Lg2WX/nRJnjwB33LVVyt2bmYJguNyj+v2sgpphx17zh+qlqPyqVAoLonnGlxI63JENw1vGFzu87eZGM9mToFLneYcPzaKUHvyB/T1WaQvdj+1InWzywMZMWeXv9jhtpu068Zeky8e7ditu08jx+DMEL82ao5QJ7zSxW/lpOo531r/5ISIIdv4dsOtNj8mBgFrHPlgoTKuXpnRBG0OCEShWMLZKU8jRnWLaesnqzXOx8y5kunYgZxaGv20OlkthXf7gthnCtCxdiwgX1TLOmzbDdBJSC6JO5XVju1GAO9ysk61yDVubTrvkj+ZLWv7uISr6TS2kLa7ToXtE89xtNGD6Nu3LfTklwm+R2SAPkRKyzOdg49Igq1mMuvvvkmzNKxw2/c3XwB+v5DU2ADhli1Dp52V55fFpdRCWAqJE3FSEEk/SvaisXRLppGUltd3tTWtd0n8mRsHNWIyd4lM=) 2026-03-11 00:21:37.515482 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEiqx/+RiyytM29CNTwYoSDzmUUnnoOiMf2ozFnpwUXp) 2026-03-11 00:21:37.515493 | orchestrator | 2026-03-11 00:21:37.515504 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:21:37.515514 | orchestrator | Wednesday 11 March 2026 00:21:36 +0000 (0:00:01.037) 0:00:10.431 ******* 2026-03-11 00:21:37.515525 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC6QfFqK/81qgHOv8qKvnZyupuzdxMDDOHEtQ88laiBprhm7A3COfDqaLjAGcavt761zB+4kkKtoYd0MD8qur4m77yhZZz/tdEK7sMXD/CQh/E5HRKAbk2xNc/TkAGwyllBNS867g/kUIhC9bbpLos0mMFNuroW9Sb0Q+L+X/HbCumigHRYBsDw7kyWZ1TYsYkoNPByvIHG/nhvOJx3lcnEI3Iv2AmoIwoxZSMuWvl7fseaGM5iD3Av4FAWbo8zIKIFsgzCL5MOgsSSGliEKJQIt0vIB7YXB4M6VcBvr5lZ7WF1oQMS4FsYurjQFxYmm4/XMUooz9ZfAOmg0WnvXIf0/voAsUOf9El7Zn2q1pFO5L6+J7WlgQABRtdFfRnGAe3802yYvgPdquF9UyJl6LhTCzv3S9bBxcppsReiL6C/PjafNCx1+xMkQFvIyboEpw7gm/YxuHE3n48iXzGS5V04X9eEZLvOuPhTUUZbnxoxrHRqE6aiW5SlQrqgW1sum6E=) 2026-03-11 00:21:37.515543 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ8MStbdy1d1JtngFo7HHyxA4vWCIsBQVW8EKR3uca3W5Hv429itOquqqm4OtMJYNymAkV7W7eHncz+tHxcrcIs=) 2026-03-11 00:21:37.515554 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPGq01F/I3IJbWj97Kc0+cToja5LFyW2MuQ3raM1GGMr) 2026-03-11 00:21:37.515564 | orchestrator | 2026-03-11 00:21:37.515575 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:21:37.515586 | orchestrator | Wednesday 11 March 2026 00:21:37 +0000 (0:00:00.987) 0:00:11.418 ******* 2026-03-11 00:21:37.515605 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDH0VcV17qmGwG2IJWdVZyaEoMyFSPK1d0WTKkjkXAxqYkcQKKIfxitNadQYCIlY4TL+6l4iFvTuqX+sdPZPfpTlmOps8eQmkwQrPxr/UdzGNAHo353FHSCXI0mYan0N2xcGW2MA5UwO4xuE8oQmS0aKT6YGWQsQx54J0XZ65oFaQzEQT3epxJTCRqE3TkpGb21qpHMWfBIEASL6j6x0b8jb5wiP6zU2NMA9q+HGgx6kAvq76IL5W7Zd44qQdLvqpCfQh8mOYair4omeGJh2wDoqHERMTrHVkofnjTyCi+yC6QyKMMbCepTVv8tvzuQb5HzVNehHJgs78S165z7Gt9ukGYskpHZnrJ6FYRlJadi5HmtiTXWz3IPjNtSqS6AIeIeoRTqSB2nntn8F2BNkZ5DQ/FTNv4HF1JXz+knz37DQ1REEUFVwUpBJB3NNkI1lRiaSsmcwk2L/s2Kq6ZfH+2h+VAuESY8fC8pSl8rdUx5K2VoHySdqOgrU3HdDYgIwVs=) 2026-03-11 00:21:48.502506 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEHH4GkqSEB9Ra+GuSBvk1ND5IakzmJdYjHRcbIwtxfq) 2026-03-11 00:21:48.502637 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJBzBmo/S3u/FA+BMeEIJII4BaIjbIUmlb2lCl+eJYFdnJpgVap/YK36lQ5wXlp27S/zf6ZM8a1wwpmFgrq3Ed8=) 2026-03-11 00:21:48.502654 | orchestrator | 2026-03-11 00:21:48.502667 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:21:48.502680 | orchestrator | Wednesday 11 March 2026 00:21:38 +0000 (0:00:00.995) 0:00:12.414 ******* 2026-03-11 00:21:48.502693 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0A5TRUrXQdvlwH8GkF9Ocyy5Xvgvv9pUBK/TIUYZLYbbazMyHK18Ui4qA870xA2drJoBGPL9ThSq+Un/KIw4vIS5knZAzZE0fCPAADTx73lCQ96orPPSMepkxK7T/QZIE+jV6UbPtm++mBqk3oBlfPtRQfIyseYdIyZqmgtzQ/G8ZIPMxdjSyRZZouArNQInkoMgu5+aFmvUsZT2ST0QEXcOgeiEkAllDu6Cuwy8x/AM8DCkANGRyUYOBT7tZimIXbmtVB67TT42r3T/H1n1oXL7+GpnfEPr6qLp8SwCk0gcsf+rrSg4m5xYMl63sf1ZmmeDVo/LBZONL/KBCqRfIkgSt/DIZJcJ4qfKmwn8ToFBcBf+WxXphAl+gcFio7Tsjm59ahJ5kQvPbZMg2Ty+n1ap1BacYfmR7Z1TDpCZdVWh5KNaH19pktCY4HZ4MaArMb7N9t3/GUx51Sg5UVZ4o7RSzLxpPmJdv7dVJH2R5fn73oAdONdTCHvDkN4kCaTE=) 2026-03-11 00:21:48.502707 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBErK8xaUMVYLlJapCejQgKzpwtdxnky0+Wb+wWLDGL/BWnco+94F14Kf6yTKB4ISa/rkbtuQEQ9IkxOdex2Zk+U=) 2026-03-11 00:21:48.502719 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEkcBLdeqmGKf0HcVmvpNFSkqVK8Jf7Z9QbHdZquBQJE) 2026-03-11 00:21:48.502730 | orchestrator | 2026-03-11 00:21:48.502742 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2026-03-11 00:21:48.502754 | orchestrator | Wednesday 11 March 2026 00:21:39 +0000 
(0:00:01.016) 0:00:13.430 ******* 2026-03-11 00:21:48.502765 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2026-03-11 00:21:48.502777 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2026-03-11 00:21:48.502788 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2026-03-11 00:21:48.502799 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2026-03-11 00:21:48.502810 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2026-03-11 00:21:48.502839 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2026-03-11 00:21:48.502854 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2026-03-11 00:21:48.502900 | orchestrator | 2026-03-11 00:21:48.502920 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2026-03-11 00:21:48.502939 | orchestrator | Wednesday 11 March 2026 00:21:44 +0000 (0:00:05.104) 0:00:18.534 ******* 2026-03-11 00:21:48.502957 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2026-03-11 00:21:48.502973 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2026-03-11 00:21:48.502985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2026-03-11 00:21:48.502997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2026-03-11 00:21:48.503010 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2026-03-11 00:21:48.503022 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2026-03-11 00:21:48.503034 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2026-03-11 00:21:48.503046 | orchestrator | 2026-03-11 00:21:48.503059 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:21:48.503071 | orchestrator | Wednesday 11 March 2026 00:21:44 +0000 (0:00:00.164) 0:00:18.699 ******* 2026-03-11 00:21:48.503111 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDpZtCtpY6d/WSXi2+PvvGhlZPID3eURYK4niJjcvscObjH80BLK0MFgSZceWTySDHyOzClAR9TLziLRYxU3LUrAxUABJcOegGZbMgMKsEY+wbnjKjRACrJKcVpnbsA7crFM1HWOM++rVp77d0Ox4ywoDFlMZS5IyUn51rQJL1GAXhOqfiGmNCMNVlWyRyt0VpxdpPZqbJf9oVKN47DvlxCFI+XdIdoBUocSM1PG1qr3XEuFEGjrTW8REReFq0595mnZ8RiTQX1SdgxGLILE6iuPv0gEMFiHLO2MwT2ZEyKYsT1d3QbJkR/l1URJ18AFsuYAY43+mtMmZ7uP8+rQyDk40Ba5ZPvlNhdWTD19bEJJyRaPRPjUeP4JlndQwd9lZ1gKOee7SyChGkdBWvcPw5L9ejpTEoEaFH3sktPx2hL3dCQNvuQfuLvIqNX5H2t9MC+GI3YvsHIOaAClmtL8LA+D/rODMcE0KjTsH2ONanDMmk1JzsvaXv3xrk8E7XCV2c=) 2026-03-11 00:21:48.503126 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBMcGhvFsiYojaVH7j3cFCPOlvpU+s8CQ6aKyUvyImU3iYLQQtBnom9o1vrltG1ErKPjNpDVfwsoTDClhSrDR8us=) 2026-03-11 00:21:48.503139 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB1XkErJCxIQiCHD+e/yuKipeyJD8eMgUGEManCH5M2E) 2026-03-11 
00:21:48.503151 | orchestrator | 2026-03-11 00:21:48.503164 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:21:48.503176 | orchestrator | Wednesday 11 March 2026 00:21:45 +0000 (0:00:01.006) 0:00:19.706 ******* 2026-03-11 00:21:48.503189 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKk4oEm3wEmTHz3jIpGW7BYgckh3zpKYF4fva1Q5U0lMLvNDzgSntPl0I8Vg2/W+vHbDKaznZ1Fy/eiJblIUfww=) 2026-03-11 00:21:48.503203 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDBy0ZfBO2xtD0xnUmmSxK0c1REbUU9MrNkbhfsqu3cXwQuofB3daDkjlvZKF1Ttp6mfuZeUk/8DCzaYhLEbjMsFpPHTLjBKijtlVqKP8l/NUk7So90DiDtZqLaGpEX7dGf0fPDd9UYxgpb7LuEhIwmMnFuI8yFyX/R74GWvnSbkRdoiIvS1tRVGB9aCxtfKSryGLSAK4MEBqckuZcQGWWGYO4DYglShQ7u57lz6T3w2SJaRiPBRQMGi8hB4dpA+0XynmHSEId6LdU7mm4L3ObceXDABbh5CZBDZdfv4HtKHZYdWnMhwKEReAMMyrCoDEO1SbRXEI220kTgmogymOOT3SqhTEU5S3maaFzCubFjNBlHDnLM4EpSURrhTRLz8RcjHuf3Li/TuJYK/VDkEXWLlx5ev9mV8tW026nLRSKGbPHTR1N1cOFCuYNyTJglG+a34OPSQudF8m7pCzm5sdJ3E2VUmusgVSjAPP4mvBgnf901+Kzxb1Xi8QcMesZel60=) 2026-03-11 00:21:48.503225 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBvinnGP4ZhfEcifMWq7fKrpBsQsxYcXr2WeV32NVZZV) 2026-03-11 00:21:48.503237 | orchestrator | 2026-03-11 00:21:48.503250 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:21:48.503263 | orchestrator | Wednesday 11 March 2026 00:21:46 +0000 (0:00:01.041) 0:00:20.747 ******* 2026-03-11 00:21:48.503275 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCjZ3Xq1TfHNatOoq+i7RZhxTqO1zlID6+TfBaAeHVISjit8AsVMkKrqzZS84VvkLTk71MhlxO/VqsbTB8e2iT44z1FusHlR0y8pniyVx/0ZAvkg8KkSc1+NCc6fMUeTdCbcyeAIk7yTtHlf6PyboP7IZrTWEJYQI5q7LwbkfDkjx8la6AzUHn/zWgIaKhwRxuO1tT2uzBE+hfarMQ1Lksv4ARNewGE7zkx+y/0vHStcZH/gL3xl91WVbEhGKAk7Vs5xXnL/nDcJNO10b2N+JIgoZa8ia+5mwbzMqcNIb+ww4LKdJqwU6wfYtwniRZEq8k+c/hCpIjdvYxKW75c75ub8D94m31b7fXtF01vogg923aiaI3SkOsXKfA4ldBYj6V+vlgsiGCkv2pRmCyFuiAF+vYaW85we1lBLA0o2Xr567GBazxBUW7BnqfcnsMqYlH+PHmbvhbph3kz131m0p26OY9F2FYmAYFrgZhsJNDiZ1dwMt4R6QnVkkm/RwE8Jec=) 2026-03-11 00:21:48.503289 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJC61iATa9vAbVNr1L18On8cr4uS8LQ/J6XiCZ8Ivet+HT7HuV3HQqTAqX2sUmRNHcSQISsWav+pe5RPMMG4jOQ=) 2026-03-11 00:21:48.503336 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIu+8PKXZem7u3nH2UB2MQ3N8eSPN+uJRdNCgASabjlx) 2026-03-11 00:21:48.503347 | orchestrator | 2026-03-11 00:21:48.503359 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:21:48.503370 | orchestrator | Wednesday 11 March 2026 00:21:47 +0000 (0:00:01.017) 0:00:21.764 ******* 2026-03-11 00:21:48.503381 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNOAFU7rH2nMGLRJxGjosVwXicsciQ/GQHID4EVOj0Phv0pD1TeuTyVr6Oh+9V4XhX/ppdRe55f+rlIfEKt17No=) 2026-03-11 00:21:48.503399 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCUQLEAxhd12v97jRg0KH45KL64LIYUpwhRSYBNLWGvvkkgu3Lg2WX/nRJnjwB33LVVyt2bmYJguNyj+v2sgpphx17zh+qlqPyqVAoLonnGlxI63JENw1vGFzu87eZGM9mToFLneYcPzaKUHvyB/T1WaQvdj+1InWzywMZMWeXv9jhtpu068Zeky8e7ditu08jx+DMEL82ao5QJ7zSxW/lpOo531r/5ISIIdv4dsOtNj8mBgFrHPlgoTKuXpnRBG0OCEShWMLZKU8jRnWLaesnqzXOx8y5kunYgZxaGv20OlkthXf7gthnCtCxdiwgX1TLOmzbDdBJSC6JO5XVju1GAO9ysk61yDVubTrvkj+ZLWv7uISr6TS2kLa7ToXtE89xtNGD6Nu3LfTklwm+R2SAPkRKyzOdg49Igq1mMuvvvkmzNKxw2/c3XwB+v5DU2ADhli1Dp52V55fFpdRCWAqJE3FSEEk/SvaisXRLppGUltd3tTWtd0n8mRsHNWIyd4lM=) 2026-03-11 00:21:48.503422 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEiqx/+RiyytM29CNTwYoSDzmUUnnoOiMf2ozFnpwUXp) 2026-03-11 00:21:52.718754 | orchestrator | 2026-03-11 00:21:52.718855 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:21:52.718869 | orchestrator | Wednesday 11 March 2026 00:21:48 +0000 (0:00:01.024) 0:00:22.789 ******* 2026-03-11 00:21:52.718879 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPGq01F/I3IJbWj97Kc0+cToja5LFyW2MuQ3raM1GGMr) 2026-03-11 00:21:52.718907 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC6QfFqK/81qgHOv8qKvnZyupuzdxMDDOHEtQ88laiBprhm7A3COfDqaLjAGcavt761zB+4kkKtoYd0MD8qur4m77yhZZz/tdEK7sMXD/CQh/E5HRKAbk2xNc/TkAGwyllBNS867g/kUIhC9bbpLos0mMFNuroW9Sb0Q+L+X/HbCumigHRYBsDw7kyWZ1TYsYkoNPByvIHG/nhvOJx3lcnEI3Iv2AmoIwoxZSMuWvl7fseaGM5iD3Av4FAWbo8zIKIFsgzCL5MOgsSSGliEKJQIt0vIB7YXB4M6VcBvr5lZ7WF1oQMS4FsYurjQFxYmm4/XMUooz9ZfAOmg0WnvXIf0/voAsUOf9El7Zn2q1pFO5L6+J7WlgQABRtdFfRnGAe3802yYvgPdquF9UyJl6LhTCzv3S9bBxcppsReiL6C/PjafNCx1+xMkQFvIyboEpw7gm/YxuHE3n48iXzGS5V04X9eEZLvOuPhTUUZbnxoxrHRqE6aiW5SlQrqgW1sum6E=) 2026-03-11 00:21:52.718938 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ8MStbdy1d1JtngFo7HHyxA4vWCIsBQVW8EKR3uca3W5Hv429itOquqqm4OtMJYNymAkV7W7eHncz+tHxcrcIs=) 2026-03-11 00:21:52.718950 | orchestrator | 2026-03-11 00:21:52.718959 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:21:52.718967 | orchestrator | Wednesday 11 March 2026 00:21:49 +0000 (0:00:01.011) 0:00:23.801 ******* 2026-03-11 00:21:52.718976 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEHH4GkqSEB9Ra+GuSBvk1ND5IakzmJdYjHRcbIwtxfq) 2026-03-11 00:21:52.718985 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDH0VcV17qmGwG2IJWdVZyaEoMyFSPK1d0WTKkjkXAxqYkcQKKIfxitNadQYCIlY4TL+6l4iFvTuqX+sdPZPfpTlmOps8eQmkwQrPxr/UdzGNAHo353FHSCXI0mYan0N2xcGW2MA5UwO4xuE8oQmS0aKT6YGWQsQx54J0XZ65oFaQzEQT3epxJTCRqE3TkpGb21qpHMWfBIEASL6j6x0b8jb5wiP6zU2NMA9q+HGgx6kAvq76IL5W7Zd44qQdLvqpCfQh8mOYair4omeGJh2wDoqHERMTrHVkofnjTyCi+yC6QyKMMbCepTVv8tvzuQb5HzVNehHJgs78S165z7Gt9ukGYskpHZnrJ6FYRlJadi5HmtiTXWz3IPjNtSqS6AIeIeoRTqSB2nntn8F2BNkZ5DQ/FTNv4HF1JXz+knz37DQ1REEUFVwUpBJB3NNkI1lRiaSsmcwk2L/s2Kq6ZfH+2h+VAuESY8fC8pSl8rdUx5K2VoHySdqOgrU3HdDYgIwVs=) 2026-03-11 00:21:52.718995 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJBzBmo/S3u/FA+BMeEIJII4BaIjbIUmlb2lCl+eJYFdnJpgVap/YK36lQ5wXlp27S/zf6ZM8a1wwpmFgrq3Ed8=) 2026-03-11 00:21:52.719003 | orchestrator | 2026-03-11 00:21:52.719012 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2026-03-11 00:21:52.719021 | orchestrator | Wednesday 11 March 2026 00:21:50 +0000 (0:00:01.022) 0:00:24.824 ******* 2026-03-11 00:21:52.719030 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEkcBLdeqmGKf0HcVmvpNFSkqVK8Jf7Z9QbHdZquBQJE) 2026-03-11 00:21:52.719039 | 
orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC0A5TRUrXQdvlwH8GkF9Ocyy5Xvgvv9pUBK/TIUYZLYbbazMyHK18Ui4qA870xA2drJoBGPL9ThSq+Un/KIw4vIS5knZAzZE0fCPAADTx73lCQ96orPPSMepkxK7T/QZIE+jV6UbPtm++mBqk3oBlfPtRQfIyseYdIyZqmgtzQ/G8ZIPMxdjSyRZZouArNQInkoMgu5+aFmvUsZT2ST0QEXcOgeiEkAllDu6Cuwy8x/AM8DCkANGRyUYOBT7tZimIXbmtVB67TT42r3T/H1n1oXL7+GpnfEPr6qLp8SwCk0gcsf+rrSg4m5xYMl63sf1ZmmeDVo/LBZONL/KBCqRfIkgSt/DIZJcJ4qfKmwn8ToFBcBf+WxXphAl+gcFio7Tsjm59ahJ5kQvPbZMg2Ty+n1ap1BacYfmR7Z1TDpCZdVWh5KNaH19pktCY4HZ4MaArMb7N9t3/GUx51Sg5UVZ4o7RSzLxpPmJdv7dVJH2R5fn73oAdONdTCHvDkN4kCaTE=) 2026-03-11 00:21:52.719048 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBErK8xaUMVYLlJapCejQgKzpwtdxnky0+Wb+wWLDGL/BWnco+94F14Kf6yTKB4ISa/rkbtuQEQ9IkxOdex2Zk+U=) 2026-03-11 00:21:52.719056 | orchestrator | 2026-03-11 00:21:52.719077 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2026-03-11 00:21:52.719095 | orchestrator | Wednesday 11 March 2026 00:21:51 +0000 (0:00:01.002) 0:00:25.827 ******* 2026-03-11 00:21:52.719150 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2026-03-11 00:21:52.719160 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2026-03-11 00:21:52.719168 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2026-03-11 00:21:52.719177 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2026-03-11 00:21:52.719185 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-11 00:21:52.719194 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2026-03-11 00:21:52.719203 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2026-03-11 00:21:52.719212 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:21:52.719221 | orchestrator | 2026-03-11 00:21:52.719244 | 
orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2026-03-11 00:21:52.719253 | orchestrator | Wednesday 11 March 2026 00:21:51 +0000 (0:00:00.163) 0:00:25.991 ******* 2026-03-11 00:21:52.719268 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:21:52.719277 | orchestrator | 2026-03-11 00:21:52.719323 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2026-03-11 00:21:52.719335 | orchestrator | Wednesday 11 March 2026 00:21:51 +0000 (0:00:00.042) 0:00:26.034 ******* 2026-03-11 00:21:52.719345 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:21:52.719355 | orchestrator | 2026-03-11 00:21:52.719365 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2026-03-11 00:21:52.719375 | orchestrator | Wednesday 11 March 2026 00:21:51 +0000 (0:00:00.050) 0:00:26.084 ******* 2026-03-11 00:21:52.719385 | orchestrator | changed: [testbed-manager] 2026-03-11 00:21:52.719394 | orchestrator | 2026-03-11 00:21:52.719404 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:21:52.719415 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-11 00:21:52.719426 | orchestrator | 2026-03-11 00:21:52.719436 | orchestrator | 2026-03-11 00:21:52.719446 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:21:52.719456 | orchestrator | Wednesday 11 March 2026 00:21:52 +0000 (0:00:00.696) 0:00:26.780 ******* 2026-03-11 00:21:52.719466 | orchestrator | =============================================================================== 2026-03-11 00:21:52.719476 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.94s 2026-03-11 00:21:52.719485 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with 
ansible_host --- 5.10s 2026-03-11 00:21:52.719496 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.13s 2026-03-11 00:21:52.719506 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-11 00:21:52.719516 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2026-03-11 00:21:52.719525 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-11 00:21:52.719535 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-11 00:21:52.719545 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-11 00:21:52.719555 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2026-03-11 00:21:52.719564 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-11 00:21:52.719576 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.01s 2026-03-11 00:21:52.719585 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-11 00:21:52.719595 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2026-03-11 00:21:52.719604 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-03-11 00:21:52.719621 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-03-11 00:21:52.719632 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.99s 2026-03-11 00:21:52.719641 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.70s 2026-03-11 00:21:52.719651 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all 
hosts with hostname --- 0.17s 2026-03-11 00:21:52.719660 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2026-03-11 00:21:52.719669 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.16s 2026-03-11 00:21:52.989760 | orchestrator | + osism apply squid 2026-03-11 00:22:05.142455 | orchestrator | 2026-03-11 00:22:05 | INFO  | Prepare task for execution of squid. 2026-03-11 00:22:05.203420 | orchestrator | 2026-03-11 00:22:05 | INFO  | Task 14a20a30-bfe0-4bfc-9a88-7ed93921bf6b (squid) was prepared for execution. 2026-03-11 00:22:05.203499 | orchestrator | 2026-03-11 00:22:05 | INFO  | It takes a moment until task 14a20a30-bfe0-4bfc-9a88-7ed93921bf6b (squid) has been started and output is visible here. 2026-03-11 00:24:04.245598 | orchestrator | 2026-03-11 00:24:04.245716 | orchestrator | PLAY [Apply role squid] ******************************************************** 2026-03-11 00:24:04.245735 | orchestrator | 2026-03-11 00:24:04.245748 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2026-03-11 00:24:04.245760 | orchestrator | Wednesday 11 March 2026 00:22:09 +0000 (0:00:00.116) 0:00:00.116 ******* 2026-03-11 00:24:04.245772 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2026-03-11 00:24:04.245784 | orchestrator | 2026-03-11 00:24:04.245795 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2026-03-11 00:24:04.245806 | orchestrator | Wednesday 11 March 2026 00:22:09 +0000 (0:00:00.062) 0:00:00.179 ******* 2026-03-11 00:24:04.245817 | orchestrator | ok: [testbed-manager] 2026-03-11 00:24:04.245829 | orchestrator | 2026-03-11 00:24:04.245840 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2026-03-11 
00:24:04.245851 | orchestrator | Wednesday 11 March 2026 00:22:10 +0000 (0:00:01.131) 0:00:01.311 ******* 2026-03-11 00:24:04.245875 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2026-03-11 00:24:04.245887 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2026-03-11 00:24:04.245898 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2026-03-11 00:24:04.245909 | orchestrator | 2026-03-11 00:24:04.245920 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2026-03-11 00:24:04.245930 | orchestrator | Wednesday 11 March 2026 00:22:11 +0000 (0:00:00.981) 0:00:02.292 ******* 2026-03-11 00:24:04.245941 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2026-03-11 00:24:04.245952 | orchestrator | 2026-03-11 00:24:04.245963 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2026-03-11 00:24:04.245973 | orchestrator | Wednesday 11 March 2026 00:22:12 +0000 (0:00:00.933) 0:00:03.225 ******* 2026-03-11 00:24:04.245984 | orchestrator | ok: [testbed-manager] 2026-03-11 00:24:04.245995 | orchestrator | 2026-03-11 00:24:04.246006 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2026-03-11 00:24:04.246103 | orchestrator | Wednesday 11 March 2026 00:22:12 +0000 (0:00:00.308) 0:00:03.534 ******* 2026-03-11 00:24:04.246121 | orchestrator | changed: [testbed-manager] 2026-03-11 00:24:04.246134 | orchestrator | 2026-03-11 00:24:04.246146 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2026-03-11 00:24:04.246160 | orchestrator | Wednesday 11 March 2026 00:22:13 +0000 (0:00:00.804) 0:00:04.338 ******* 2026-03-11 00:24:04.246173 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2026-03-11 00:24:04.246187 | orchestrator | ok: [testbed-manager] 2026-03-11 00:24:04.246199 | orchestrator | 2026-03-11 00:24:04.246212 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2026-03-11 00:24:04.246225 | orchestrator | Wednesday 11 March 2026 00:22:47 +0000 (0:00:34.295) 0:00:38.634 ******* 2026-03-11 00:24:04.246239 | orchestrator | changed: [testbed-manager] 2026-03-11 00:24:04.246251 | orchestrator | 2026-03-11 00:24:04.246263 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2026-03-11 00:24:04.246276 | orchestrator | Wednesday 11 March 2026 00:23:03 +0000 (0:00:15.706) 0:00:54.341 ******* 2026-03-11 00:24:04.246289 | orchestrator | Pausing for 60 seconds 2026-03-11 00:24:04.246303 | orchestrator | changed: [testbed-manager] 2026-03-11 00:24:04.246315 | orchestrator | 2026-03-11 00:24:04.246328 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2026-03-11 00:24:04.246341 | orchestrator | Wednesday 11 March 2026 00:24:03 +0000 (0:01:00.074) 0:01:54.415 ******* 2026-03-11 00:24:04.246353 | orchestrator | ok: [testbed-manager] 2026-03-11 00:24:04.246366 | orchestrator | 2026-03-11 00:24:04.246379 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2026-03-11 00:24:04.246414 | orchestrator | Wednesday 11 March 2026 00:24:03 +0000 (0:00:00.069) 0:01:54.484 ******* 2026-03-11 00:24:04.246427 | orchestrator | changed: [testbed-manager] 2026-03-11 00:24:04.246439 | orchestrator | 2026-03-11 00:24:04.246450 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:24:04.246461 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:24:04.246472 | orchestrator | 2026-03-11 00:24:04.246483 | orchestrator | 2026-03-11 00:24:04.246494 | orchestrator | 
TASKS RECAP ******************************************************************** 2026-03-11 00:24:04.246505 | orchestrator | Wednesday 11 March 2026 00:24:04 +0000 (0:00:00.587) 0:01:55.072 ******* 2026-03-11 00:24:04.246516 | orchestrator | =============================================================================== 2026-03-11 00:24:04.246526 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2026-03-11 00:24:04.246537 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 34.30s 2026-03-11 00:24:04.246548 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.71s 2026-03-11 00:24:04.246558 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.13s 2026-03-11 00:24:04.246569 | orchestrator | osism.services.squid : Create required directories ---------------------- 0.98s 2026-03-11 00:24:04.246580 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 0.93s 2026-03-11 00:24:04.246590 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.80s 2026-03-11 00:24:04.246601 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.59s 2026-03-11 00:24:04.246612 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.31s 2026-03-11 00:24:04.246646 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.07s 2026-03-11 00:24:04.246658 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.06s 2026-03-11 00:24:04.513941 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-11 00:24:04.514149 | orchestrator | + /opt/configuration/scripts/set-kolla-namespace.sh kolla 2026-03-11 00:24:04.519753 | orchestrator | + set -e 2026-03-11 00:24:04.519817 | orchestrator | + NAMESPACE=kolla 2026-03-11 
00:24:04.519833 | orchestrator | + sed -i 's#docker_namespace: .*#docker_namespace: kolla#g' /opt/configuration/inventory/group_vars/all/kolla.yml 2026-03-11 00:24:04.523019 | orchestrator | ++ semver latest 9.0.0 2026-03-11 00:24:04.563324 | orchestrator | + [[ -1 -lt 0 ]] 2026-03-11 00:24:04.563412 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2026-03-11 00:24:04.564091 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2026-03-11 00:24:16.552789 | orchestrator | 2026-03-11 00:24:16 | INFO  | Prepare task for execution of operator. 2026-03-11 00:24:16.620855 | orchestrator | 2026-03-11 00:24:16 | INFO  | Task c4355536-d476-4d56-9f76-7a1b8fb9f02e (operator) was prepared for execution. 2026-03-11 00:24:16.620947 | orchestrator | 2026-03-11 00:24:16 | INFO  | It takes a moment until task c4355536-d476-4d56-9f76-7a1b8fb9f02e (operator) has been started and output is visible here. 2026-03-11 00:24:31.772776 | orchestrator | 2026-03-11 00:24:31.772873 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2026-03-11 00:24:31.772886 | orchestrator | 2026-03-11 00:24:31.772894 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-11 00:24:31.772902 | orchestrator | Wednesday 11 March 2026 00:24:20 +0000 (0:00:00.103) 0:00:00.103 ******* 2026-03-11 00:24:31.772910 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:24:31.772919 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:24:31.772926 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:24:31.772934 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:24:31.772941 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:24:31.772948 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:24:31.772959 | orchestrator | 2026-03-11 00:24:31.772967 | orchestrator | TASK [Do not require tty for all users] **************************************** 2026-03-11 00:24:31.772994 | orchestrator | Wednesday 11 March 2026 00:24:23 
+0000 (0:00:03.313) 0:00:03.416 ******* 2026-03-11 00:24:31.773044 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:24:31.773053 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:24:31.773060 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:24:31.773067 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:24:31.773074 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:24:31.773081 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:24:31.773088 | orchestrator | 2026-03-11 00:24:31.773095 | orchestrator | PLAY [Apply role operator] ***************************************************** 2026-03-11 00:24:31.773102 | orchestrator | 2026-03-11 00:24:31.773109 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2026-03-11 00:24:31.773116 | orchestrator | Wednesday 11 March 2026 00:24:24 +0000 (0:00:00.729) 0:00:04.145 ******* 2026-03-11 00:24:31.773124 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:24:31.773131 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:24:31.773138 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:24:31.773145 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:24:31.773152 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:24:31.773159 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:24:31.773166 | orchestrator | 2026-03-11 00:24:31.773173 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2026-03-11 00:24:31.773180 | orchestrator | Wednesday 11 March 2026 00:24:24 +0000 (0:00:00.134) 0:00:04.280 ******* 2026-03-11 00:24:31.773187 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:24:31.773194 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:24:31.773201 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:24:31.773208 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:24:31.773215 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:24:31.773221 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:24:31.773230 | orchestrator | 
2026-03-11 00:24:31.773259 | orchestrator | TASK [osism.commons.operator : Create operator group] **************************
2026-03-11 00:24:31.773272 | orchestrator | Wednesday 11 March 2026 00:24:24 +0000 (0:00:00.127) 0:00:04.407 *******
2026-03-11 00:24:31.773284 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:24:31.773296 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:24:31.773308 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:24:31.773320 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:24:31.773339 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:24:31.773353 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:24:31.773376 | orchestrator |
2026-03-11 00:24:31.773396 | orchestrator | TASK [osism.commons.operator : Create user] ************************************
2026-03-11 00:24:31.773408 | orchestrator | Wednesday 11 March 2026 00:24:25 +0000 (0:00:00.597) 0:00:05.005 *******
2026-03-11 00:24:31.773420 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:24:31.773432 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:24:31.773444 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:24:31.773456 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:24:31.773489 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:24:31.773502 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:24:31.773517 | orchestrator |
2026-03-11 00:24:31.773529 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ******************
2026-03-11 00:24:31.773545 | orchestrator | Wednesday 11 March 2026 00:24:26 +0000 (0:00:00.842) 0:00:05.847 *******
2026-03-11 00:24:31.773561 | orchestrator | changed: [testbed-node-0] => (item=adm)
2026-03-11 00:24:31.773584 | orchestrator | changed: [testbed-node-2] => (item=adm)
2026-03-11 00:24:31.773608 | orchestrator | changed: [testbed-node-3] => (item=adm)
2026-03-11 00:24:31.773632 | orchestrator | changed: [testbed-node-4] => (item=adm)
2026-03-11 00:24:31.773656 | orchestrator | changed: [testbed-node-1] => (item=adm)
2026-03-11 00:24:31.773688 | orchestrator | changed: [testbed-node-5] => (item=adm)
2026-03-11 00:24:31.773712 | orchestrator | changed: [testbed-node-0] => (item=sudo)
2026-03-11 00:24:31.773725 | orchestrator | changed: [testbed-node-2] => (item=sudo)
2026-03-11 00:24:31.773738 | orchestrator | changed: [testbed-node-4] => (item=sudo)
2026-03-11 00:24:31.773764 | orchestrator | changed: [testbed-node-5] => (item=sudo)
2026-03-11 00:24:31.773777 | orchestrator | changed: [testbed-node-3] => (item=sudo)
2026-03-11 00:24:31.773790 | orchestrator | changed: [testbed-node-1] => (item=sudo)
2026-03-11 00:24:31.773802 | orchestrator |
2026-03-11 00:24:31.773814 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] *************************
2026-03-11 00:24:31.773827 | orchestrator | Wednesday 11 March 2026 00:24:27 +0000 (0:00:01.221) 0:00:07.069 *******
2026-03-11 00:24:31.773838 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:24:31.773849 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:24:31.773859 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:24:31.773872 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:24:31.773883 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:24:31.773894 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:24:31.773904 | orchestrator |
2026-03-11 00:24:31.773916 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2026-03-11 00:24:31.773930 | orchestrator | Wednesday 11 March 2026 00:24:28 +0000 (0:00:01.175) 0:00:08.244 *******
2026-03-11 00:24:31.773943 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2026-03-11 00:24:31.773955 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2026-03-11 00:24:31.773967 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2026-03-11 00:24:31.773979 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2026-03-11 00:24:31.773991 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2026-03-11 00:24:31.774111 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2026-03-11 00:24:31.774130 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2026-03-11 00:24:31.774141 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2026-03-11 00:24:31.774152 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2026-03-11 00:24:31.774164 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2026-03-11 00:24:31.774176 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2026-03-11 00:24:31.774188 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2026-03-11 00:24:31.774199 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2026-03-11 00:24:31.774211 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2026-03-11 00:24:31.774224 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2026-03-11 00:24:31.774246 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2026-03-11 00:24:31.774258 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2026-03-11 00:24:31.774269 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2026-03-11 00:24:31.774281 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2026-03-11 00:24:31.774293 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2026-03-11 00:24:31.774305 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2026-03-11 00:24:31.774317 | orchestrator |
2026-03-11 00:24:31.774328 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2026-03-11 00:24:31.774342 | orchestrator | Wednesday 11 March 2026 00:24:29 +0000 (0:00:01.259) 0:00:09.503 *******
2026-03-11 00:24:31.774355 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:24:31.774368 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:24:31.774380 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:24:31.774392 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:24:31.774404 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:24:31.774415 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:24:31.774428 | orchestrator |
2026-03-11 00:24:31.774440 | orchestrator | TASK [osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file] ***
2026-03-11 00:24:31.774463 | orchestrator | Wednesday 11 March 2026 00:24:29 +0000 (0:00:00.142) 0:00:09.646 *******
2026-03-11 00:24:31.774476 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:24:31.774488 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:24:31.774499 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:24:31.774510 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:24:31.774522 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:24:31.774533 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:24:31.774545 | orchestrator |
2026-03-11 00:24:31.774557 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2026-03-11 00:24:31.774568 | orchestrator | Wednesday 11 March 2026 00:24:29 +0000 (0:00:00.156) 0:00:09.802 *******
2026-03-11 00:24:31.774579 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:24:31.774591 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:24:31.774602 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:24:31.774613 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:24:31.774626 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:24:31.774638 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:24:31.774651 | orchestrator |
2026-03-11 00:24:31.774662 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2026-03-11 00:24:31.774675 | orchestrator | Wednesday 11 March 2026 00:24:30 +0000 (0:00:00.632) 0:00:10.435 *******
2026-03-11 00:24:31.774686 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:24:31.774698 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:24:31.774710 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:24:31.774723 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:24:31.774735 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:24:31.774748 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:24:31.774761 | orchestrator |
2026-03-11 00:24:31.774774 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2026-03-11 00:24:31.774787 | orchestrator | Wednesday 11 March 2026 00:24:30 +0000 (0:00:00.157) 0:00:10.592 *******
2026-03-11 00:24:31.774801 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-11 00:24:31.774814 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:24:31.774826 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-11 00:24:31.774838 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:24:31.774851 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-11 00:24:31.774864 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:24:31.774877 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-11 00:24:31.774892 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:24:31.774904 | orchestrator | changed: [testbed-node-2] => (item=None)
2026-03-11 00:24:31.774917 | orchestrator | changed: [testbed-node-1] => (item=None)
2026-03-11 00:24:31.774930 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:24:31.774943 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:24:31.774955 | orchestrator |
2026-03-11 00:24:31.774968 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2026-03-11 00:24:31.774981 | orchestrator | Wednesday 11 March 2026 00:24:31 +0000 (0:00:00.711) 0:00:11.303 *******
2026-03-11 00:24:31.774994 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:24:31.775068 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:24:31.775081 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:24:31.775093 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:24:31.775106 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:24:31.775120 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:24:31.775133 | orchestrator |
2026-03-11 00:24:31.775145 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2026-03-11 00:24:31.775157 | orchestrator | Wednesday 11 March 2026 00:24:31 +0000 (0:00:00.156) 0:00:11.460 *******
2026-03-11 00:24:31.775170 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:24:31.775182 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:24:31.775195 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:24:31.775207 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:24:31.775243 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:24:32.996909 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:24:32.997037 | orchestrator |
2026-03-11 00:24:32.997055 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2026-03-11 00:24:32.997068 | orchestrator | Wednesday 11 March 2026 00:24:31 +0000 (0:00:00.128) 0:00:11.589 *******
2026-03-11 00:24:32.997079 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:24:32.997090 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:24:32.997101 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:24:32.997112 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:24:32.997122 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:24:32.997133 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:24:32.997144 | orchestrator |
2026-03-11 00:24:32.997154 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2026-03-11 00:24:32.997166 | orchestrator | Wednesday 11 March 2026 00:24:31 +0000 (0:00:00.124) 0:00:11.713 *******
2026-03-11 00:24:32.997176 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:24:32.997187 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:24:32.997198 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:24:32.997209 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:24:32.997220 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:24:32.997230 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:24:32.997241 | orchestrator |
2026-03-11 00:24:32.997251 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2026-03-11 00:24:32.997262 | orchestrator | Wednesday 11 March 2026 00:24:32 +0000 (0:00:00.671) 0:00:12.385 *******
2026-03-11 00:24:32.997273 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:24:32.997284 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:24:32.997295 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:24:32.997305 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:24:32.997316 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:24:32.997326 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:24:32.997337 | orchestrator |
2026-03-11 00:24:32.997347 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:24:32.997359 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-11 00:24:32.997372 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-11 00:24:32.997404 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-11 00:24:32.997416 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-11 00:24:32.997426 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-11 00:24:32.997437 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-11 00:24:32.997448 | orchestrator |
2026-03-11 00:24:32.997459 | orchestrator |
2026-03-11 00:24:32.997470 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:24:32.997481 | orchestrator | Wednesday 11 March 2026 00:24:32 +0000 (0:00:00.207) 0:00:12.593 *******
2026-03-11 00:24:32.997492 | orchestrator | ===============================================================================
2026-03-11 00:24:32.997502 | orchestrator | Gathering Facts --------------------------------------------------------- 3.31s
2026-03-11 00:24:32.997513 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.26s
2026-03-11 00:24:32.997525 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.22s
2026-03-11 00:24:32.997556 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.18s
2026-03-11 00:24:32.997567 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.84s
2026-03-11 00:24:32.997578 | orchestrator | Do not require tty for all users ---------------------------------------- 0.73s
2026-03-11 00:24:32.997589 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.71s
2026-03-11 00:24:32.997599 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.67s
2026-03-11 00:24:32.997610 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.63s
2026-03-11 00:24:32.997621 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.60s
2026-03-11 00:24:32.997631 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.21s
2026-03-11 00:24:32.997642 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2026-03-11 00:24:32.997653 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.16s
2026-03-11 00:24:32.997664 | orchestrator | osism.commons.operator : Set custom PS1 prompt in .bashrc configuration file --- 0.16s
2026-03-11 00:24:32.997674 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.14s
2026-03-11 00:24:32.997685 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.13s
2026-03-11 00:24:32.997696 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.13s
2026-03-11 00:24:32.997706 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.13s
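The PLAY RECAP lines above follow Ansible's fixed `host : ok=… changed=… … ignored=0` layout, so a gate that post-processes console logs like this one can pull the counters out mechanically. A minimal illustrative helper (`parse_recap_line` is hypothetical, not part of the job or of OSISM):

```python
import re

# One recap line: inventory hostname, a colon, then key=value counter pairs.
RECAP_RE = re.compile(
    r"^(?P<host>\S+)\s*:\s*"            # inventory hostname
    r"(?P<counters>(?:\w+=\d+\s*)+)$"   # ok=.. changed=.. .. ignored=.. pairs
)

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Split an Ansible PLAY RECAP line into (host, counter dict)."""
    m = RECAP_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not a recap line: {line!r}")
    counters = {
        key: int(value)
        for key, value in (pair.split("=") for pair in m.group("counters").split())
    }
    return m.group("host"), counters

# Example taken verbatim from the recap above.
host, counts = parse_recap_line(
    "testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0"
)
```

Checking `counts["failed"] == 0 and counts["unreachable"] == 0` for every host is one simple way to decide programmatically that a play like this succeeded.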
2026-03-11 00:24:32.997717 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.12s
2026-03-11 00:24:33.265841 | orchestrator | + osism apply --environment custom facts
2026-03-11 00:24:35.184825 | orchestrator | 2026-03-11 00:24:35 | INFO  | Trying to run play facts in environment custom
2026-03-11 00:24:45.272045 | orchestrator | 2026-03-11 00:24:45 | INFO  | Prepare task for execution of facts.
2026-03-11 00:24:45.345483 | orchestrator | 2026-03-11 00:24:45 | INFO  | Task d7dd3add-d6e0-46fd-b42d-688f6bc16da3 (facts) was prepared for execution.
2026-03-11 00:24:45.345605 | orchestrator | 2026-03-11 00:24:45 | INFO  | It takes a moment until task d7dd3add-d6e0-46fd-b42d-688f6bc16da3 (facts) has been started and output is visible here.
2026-03-11 00:25:27.856197 | orchestrator |
2026-03-11 00:25:27.856317 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2026-03-11 00:25:27.856335 | orchestrator |
2026-03-11 00:25:27.856347 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-11 00:25:27.856375 | orchestrator | Wednesday 11 March 2026 00:24:48 +0000 (0:00:00.048) 0:00:00.048 *******
2026-03-11 00:25:27.856387 | orchestrator | ok: [testbed-manager]
2026-03-11 00:25:27.856399 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:25:27.856411 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:25:27.856422 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:25:27.856433 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:25:27.856443 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:25:27.856454 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:25:27.856465 | orchestrator |
2026-03-11 00:25:27.856476 | orchestrator | TASK [Copy fact file] **********************************************************
2026-03-11 00:25:27.856487 | orchestrator | Wednesday 11 March 2026 00:24:50 +0000 (0:00:01.409) 0:00:01.458 *******
2026-03-11 00:25:27.856498 | orchestrator | ok: [testbed-manager]
2026-03-11 00:25:27.856509 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:25:27.856519 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:25:27.856530 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:25:27.856542 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:25:27.856553 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:25:27.856564 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:25:27.856574 | orchestrator |
2026-03-11 00:25:27.856606 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2026-03-11 00:25:27.856618 | orchestrator |
2026-03-11 00:25:27.856628 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2026-03-11 00:25:27.856639 | orchestrator | Wednesday 11 March 2026 00:24:51 +0000 (0:00:01.183) 0:00:02.642 *******
2026-03-11 00:25:27.856650 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:25:27.856661 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:25:27.856671 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:25:27.856682 | orchestrator |
2026-03-11 00:25:27.856716 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2026-03-11 00:25:27.856730 | orchestrator | Wednesday 11 March 2026 00:24:51 +0000 (0:00:00.075) 0:00:02.718 *******
2026-03-11 00:25:27.856742 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:25:27.856754 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:25:27.856767 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:25:27.856778 | orchestrator |
2026-03-11 00:25:27.856791 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2026-03-11 00:25:27.856803 | orchestrator | Wednesday 11 March 2026 00:24:51 +0000 (0:00:00.160) 0:00:02.878 *******
2026-03-11 00:25:27.856815 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:25:27.856827 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:25:27.856839 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:25:27.856851 | orchestrator |
2026-03-11 00:25:27.856863 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2026-03-11 00:25:27.856875 | orchestrator | Wednesday 11 March 2026 00:24:52 +0000 (0:00:00.181) 0:00:03.060 *******
2026-03-11 00:25:27.856889 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:25:27.856902 | orchestrator |
2026-03-11 00:25:27.856936 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2026-03-11 00:25:27.856949 | orchestrator | Wednesday 11 March 2026 00:24:52 +0000 (0:00:00.149) 0:00:03.209 *******
2026-03-11 00:25:27.856961 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:25:27.856972 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:25:27.856984 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:25:27.856996 | orchestrator |
2026-03-11 00:25:27.857009 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2026-03-11 00:25:27.857022 | orchestrator | Wednesday 11 March 2026 00:24:52 +0000 (0:00:00.403) 0:00:03.613 *******
2026-03-11 00:25:27.857034 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:25:27.857046 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:25:27.857057 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:25:27.857068 | orchestrator |
2026-03-11 00:25:27.857078 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2026-03-11 00:25:27.857089 | orchestrator | Wednesday 11 March 2026 00:24:52 +0000 (0:00:00.115) 0:00:03.728 *******
2026-03-11 00:25:27.857100 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:25:27.857111 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:25:27.857121 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:25:27.857132 | orchestrator |
2026-03-11 00:25:27.857142 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2026-03-11 00:25:27.857153 | orchestrator | Wednesday 11 March 2026 00:24:53 +0000 (0:00:00.980) 0:00:04.708 *******
2026-03-11 00:25:27.857164 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:25:27.857174 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:25:27.857185 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:25:27.857195 | orchestrator |
2026-03-11 00:25:27.857206 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2026-03-11 00:25:27.857217 | orchestrator | Wednesday 11 March 2026 00:24:54 +0000 (0:00:00.440) 0:00:05.149 *******
2026-03-11 00:25:27.857228 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:25:27.857238 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:25:27.857249 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:25:27.857260 | orchestrator |
2026-03-11 00:25:27.857280 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2026-03-11 00:25:27.857291 | orchestrator | Wednesday 11 March 2026 00:24:55 +0000 (0:00:01.105) 0:00:06.255 *******
2026-03-11 00:25:27.857301 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:25:27.857312 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:25:27.857322 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:25:27.857333 | orchestrator |
2026-03-11 00:25:27.857344 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2026-03-11 00:25:27.857354 | orchestrator | Wednesday 11 March 2026 00:25:10 +0000 (0:00:15.581) 0:00:21.837 *******
2026-03-11 00:25:27.857365 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:25:27.857375 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:25:27.857386 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:25:27.857397 | orchestrator |
2026-03-11 00:25:27.857407 | orchestrator | TASK [Install required packages (Debian)] **************************************
2026-03-11 00:25:27.857435 | orchestrator | Wednesday 11 March 2026 00:25:10 +0000 (0:00:00.087) 0:00:21.924 *******
2026-03-11 00:25:27.857446 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:25:27.857457 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:25:27.857467 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:25:27.857478 | orchestrator |
2026-03-11 00:25:27.857489 | orchestrator | TASK [Create custom facts directory] *******************************************
2026-03-11 00:25:27.857500 | orchestrator | Wednesday 11 March 2026 00:25:18 +0000 (0:00:07.976) 0:00:29.901 *******
2026-03-11 00:25:27.857511 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:25:27.857522 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:25:27.857532 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:25:27.857543 | orchestrator |
2026-03-11 00:25:27.857554 | orchestrator | TASK [Copy fact files] *********************************************************
2026-03-11 00:25:27.857564 | orchestrator | Wednesday 11 March 2026 00:25:19 +0000 (0:00:00.451) 0:00:30.353 *******
2026-03-11 00:25:27.857575 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2026-03-11 00:25:27.857586 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2026-03-11 00:25:27.857597 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2026-03-11 00:25:27.857608 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2026-03-11 00:25:27.857618 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2026-03-11 00:25:27.857629 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2026-03-11 00:25:27.857640 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2026-03-11 00:25:27.857651 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2026-03-11 00:25:27.857661 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2026-03-11 00:25:27.857672 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2026-03-11 00:25:27.857683 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2026-03-11 00:25:27.857693 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2026-03-11 00:25:27.857704 | orchestrator |
2026-03-11 00:25:27.857715 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2026-03-11 00:25:27.857725 | orchestrator | Wednesday 11 March 2026 00:25:22 +0000 (0:00:03.496) 0:00:33.850 *******
2026-03-11 00:25:27.857736 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:25:27.857747 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:25:27.857757 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:25:27.857768 | orchestrator |
2026-03-11 00:25:27.857779 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-11 00:25:27.857793 | orchestrator |
2026-03-11 00:25:27.857810 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-11 00:25:27.857829 | orchestrator | Wednesday 11 March 2026 00:25:24 +0000 (0:00:01.305) 0:00:35.155 *******
2026-03-11 00:25:27.857853 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:25:27.857890 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:25:27.857907 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:25:27.857949 | orchestrator | ok: [testbed-manager]
2026-03-11 00:25:27.857968 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:25:27.857986 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:25:27.858005 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:25:27.858088 | orchestrator |
2026-03-11 00:25:27.858101 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:25:27.858158 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:25:27.858172 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:25:27.858184 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:25:27.858195 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:25:27.858206 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-11 00:25:27.858217 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-11 00:25:27.858228 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-11 00:25:27.858238 | orchestrator |
2026-03-11 00:25:27.858249 | orchestrator |
2026-03-11 00:25:27.858260 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:25:27.858271 | orchestrator | Wednesday 11 March 2026 00:25:27 +0000 (0:00:03.735) 0:00:38.891 *******
2026-03-11 00:25:27.858281 | orchestrator | ===============================================================================
2026-03-11 00:25:27.858292 | orchestrator | osism.commons.repository : Update package cache ------------------------ 15.58s
2026-03-11 00:25:27.858302 | orchestrator | Install required packages (Debian) -------------------------------------- 7.98s
2026-03-11 00:25:27.858313 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.74s
2026-03-11 00:25:27.858323 | orchestrator | Copy fact files --------------------------------------------------------- 3.50s
2026-03-11 00:25:27.858334 | orchestrator | Create custom facts directory ------------------------------------------- 1.41s
2026-03-11 00:25:27.858344 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.31s
2026-03-11 00:25:27.858366 | orchestrator | Copy fact file ---------------------------------------------------------- 1.18s
2026-03-11 00:25:28.053609 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.11s
2026-03-11 00:25:28.053770 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.98s
2026-03-11 00:25:28.053794 | orchestrator | Create custom facts directory ------------------------------------------- 0.45s
2026-03-11 00:25:28.053813 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.44s
2026-03-11 00:25:28.053831 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.40s
2026-03-11 00:25:28.053850 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.18s
2026-03-11 00:25:28.053867 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.16s
2026-03-11 00:25:28.053885 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.15s
2026-03-11 00:25:28.053903 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.12s
2026-03-11 00:25:28.053964 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.09s
2026-03-11 00:25:28.053985 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.08s
2026-03-11 00:25:28.350691 | orchestrator | + osism apply bootstrap
2026-03-11 00:25:40.338718 | orchestrator | 2026-03-11 00:25:40 | INFO  | Prepare task for execution of bootstrap.
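The TASKS RECAP sections in this log come from the `profile_tasks` callback, which prints each task name padded with dashes and its wall-clock time. When hunting for slow steps across many nightly runs (here, `Update package cache` at 15.58s dominates), these lines can be parsed with a small sketch like the following; `parse_duration` is a hypothetical helper, not part of the job:

```python
import re

# One profile_tasks recap line: task name, a run of dashes, seconds with "s".
DURATION_RE = re.compile(r"^(?P<task>.+?)\s*-{2,}\s*(?P<secs>\d+\.\d+)s$")

def parse_duration(line: str) -> tuple[str, float]:
    """Parse one profile_tasks recap line into (task name, seconds)."""
    m = DURATION_RE.match(line.strip())
    if m is None:
        raise ValueError(f"not a duration line: {line!r}")
    return m.group("task"), float(m.group("secs"))

# Lines taken verbatim from the recap above.
recap_lines = [
    "osism.commons.repository : Update package cache ------------------------ 15.58s",
    "Install required packages (Debian) -------------------------------------- 7.98s",
    "Gathers facts about hosts ----------------------------------------------- 3.74s",
]
slowest = max((parse_duration(line) for line in recap_lines), key=lambda t: t[1])
```

Sorting all parsed pairs by duration, run over run, makes regressions in individual tasks easy to spot.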
2026-03-11 00:25:40.415063 | orchestrator | 2026-03-11 00:25:40 | INFO  | Task 4a2dcfa8-487f-4bef-bf6e-743d052f0849 (bootstrap) was prepared for execution.
2026-03-11 00:25:40.415156 | orchestrator | 2026-03-11 00:25:40 | INFO  | It takes a moment until task 4a2dcfa8-487f-4bef-bf6e-743d052f0849 (bootstrap) has been started and output is visible here.
2026-03-11 00:25:56.403269 | orchestrator |
2026-03-11 00:25:56.403409 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2026-03-11 00:25:56.403436 | orchestrator |
2026-03-11 00:25:56.403455 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2026-03-11 00:25:56.403473 | orchestrator | Wednesday 11 March 2026 00:25:44 +0000 (0:00:00.102) 0:00:00.102 *******
2026-03-11 00:25:56.403492 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:25:56.403511 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:25:56.403527 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:25:56.403545 | orchestrator | ok: [testbed-manager]
2026-03-11 00:25:56.403562 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:25:56.403579 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:25:56.403597 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:25:56.403613 | orchestrator |
2026-03-11 00:25:56.403631 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2026-03-11 00:25:56.403647 | orchestrator |
2026-03-11 00:25:56.403664 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-11 00:25:56.403682 | orchestrator | Wednesday 11 March 2026 00:25:44 +0000 (0:00:00.201) 0:00:00.303 *******
2026-03-11 00:25:56.403700 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:25:56.403719 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:25:56.403758 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:25:56.403776 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:25:56.403797 | orchestrator | ok: [testbed-manager]
2026-03-11 00:25:56.403815 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:25:56.403835 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:25:56.403854 | orchestrator |
2026-03-11 00:25:56.403908 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2026-03-11 00:25:56.403929 | orchestrator |
2026-03-11 00:25:56.403948 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2026-03-11 00:25:56.403968 | orchestrator | Wednesday 11 March 2026 00:25:48 +0000 (0:00:03.797) 0:00:04.101 *******
2026-03-11 00:25:56.403989 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:25:56.404009 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-11 00:25:56.404027 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2026-03-11 00:25:56.404048 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-11 00:25:56.404066 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2026-03-11 00:25:56.404085 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2026-03-11 00:25:56.404104 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2026-03-11 00:25:56.404121 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2026-03-11 00:25:56.404139 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2026-03-11 00:25:56.404159 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-11 00:25:56.404177 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2026-03-11 00:25:56.404195 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-11 00:25:56.404214 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2026-03-11 00:25:56.404233 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-11 00:25:56.404253 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2026-03-11 00:25:56.404273 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2026-03-11 00:25:56.404331 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-11 00:25:56.404353 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:25:56.404371 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2026-03-11 00:25:56.404390 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2026-03-11 00:25:56.404408 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-11 00:25:56.404427 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-11 00:25:56.404446 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-11 00:25:56.404466 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2026-03-11 00:25:56.404484 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:25:56.404503 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2026-03-11 00:25:56.404522 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2026-03-11 00:25:56.404561 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-11 00:25:56.404583 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2026-03-11 00:25:56.404603 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-11 00:25:56.404623 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2026-03-11 00:25:56.404643 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:25:56.404663 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2026-03-11 00:25:56.404683 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2026-03-11 00:25:56.404703 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2026-03-11 00:25:56.404724 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2026-03-11 00:25:56.404744 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2026-03-11 00:25:56.404765 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2026-03-11 00:25:56.404785 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2026-03-11 00:25:56.404806 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-11 00:25:56.404826 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2026-03-11 00:25:56.404907 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2026-03-11 00:25:56.404927 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2026-03-11 00:25:56.404947 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-11 00:25:56.404966 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2026-03-11 00:25:56.404986 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2026-03-11 00:25:56.405036 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2026-03-11 00:25:56.405058 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:25:56.405078 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-11 00:25:56.405098 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:25:56.405118 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2026-03-11 00:25:56.405137 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2026-03-11 00:25:56.405157 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2026-03-11 00:25:56.405177 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:25:56.405197 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2026-03-11 00:25:56.405217 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:25:56.405237 | orchestrator |
2026-03-11 00:25:56.405257 | orchestrator |
PLAY [Apply bootstrap roles part 1] ******************************************** 2026-03-11 00:25:56.405277 | orchestrator | 2026-03-11 00:25:56.405298 | orchestrator | TASK [osism.commons.hostname : Set hostname] *********************************** 2026-03-11 00:25:56.405319 | orchestrator | Wednesday 11 March 2026 00:25:48 +0000 (0:00:00.363) 0:00:04.465 ******* 2026-03-11 00:25:56.405338 | orchestrator | ok: [testbed-manager] 2026-03-11 00:25:56.405358 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:25:56.405393 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:25:56.405412 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:25:56.405430 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:25:56.405449 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:25:56.405469 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:25:56.405488 | orchestrator | 2026-03-11 00:25:56.405508 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] ***************************** 2026-03-11 00:25:56.405529 | orchestrator | Wednesday 11 March 2026 00:25:50 +0000 (0:00:01.280) 0:00:05.746 ******* 2026-03-11 00:25:56.405549 | orchestrator | ok: [testbed-manager] 2026-03-11 00:25:56.405569 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:25:56.405589 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:25:56.405609 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:25:56.405628 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:25:56.405645 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:25:56.405664 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:25:56.405683 | orchestrator | 2026-03-11 00:25:56.405701 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] *********************** 2026-03-11 00:25:56.405718 | orchestrator | Wednesday 11 March 2026 00:25:51 +0000 (0:00:01.228) 0:00:06.975 ******* 2026-03-11 00:25:56.405737 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:25:56.405760 | orchestrator | 2026-03-11 00:25:56.405779 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ****************************** 2026-03-11 00:25:56.405798 | orchestrator | Wednesday 11 March 2026 00:25:51 +0000 (0:00:00.279) 0:00:07.254 ******* 2026-03-11 00:25:56.405815 | orchestrator | changed: [testbed-manager] 2026-03-11 00:25:56.405832 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:25:56.405849 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:25:56.405866 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:25:56.405910 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:25:56.405927 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:25:56.405943 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:25:56.405960 | orchestrator | 2026-03-11 00:25:56.405979 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2026-03-11 00:25:56.405997 | orchestrator | Wednesday 11 March 2026 00:25:53 +0000 (0:00:02.289) 0:00:09.544 ******* 2026-03-11 00:25:56.406096 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:25:56.406149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:25:56.406172 | orchestrator | 2026-03-11 00:25:56.406192 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2026-03-11 00:25:56.406213 | orchestrator | Wednesday 11 March 2026 00:25:54 +0000 (0:00:00.275) 0:00:09.820 ******* 2026-03-11 00:25:56.406233 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:25:56.406291 | 
orchestrator | changed: [testbed-node-4] 2026-03-11 00:25:56.406312 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:25:56.406333 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:25:56.406352 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:25:56.406369 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:25:56.406387 | orchestrator | 2026-03-11 00:25:56.406405 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2026-03-11 00:25:56.406424 | orchestrator | Wednesday 11 March 2026 00:25:55 +0000 (0:00:01.081) 0:00:10.902 ******* 2026-03-11 00:25:56.406441 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:25:56.406458 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:25:56.406494 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:25:56.406515 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:25:56.406534 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:25:56.406553 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:25:56.406589 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:25:56.406610 | orchestrator | 2026-03-11 00:25:56.406631 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2026-03-11 00:25:56.406652 | orchestrator | Wednesday 11 March 2026 00:25:55 +0000 (0:00:00.568) 0:00:11.470 ******* 2026-03-11 00:25:56.406672 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:25:56.406692 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:25:56.406713 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:25:56.406733 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:25:56.406754 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:25:56.406773 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:25:56.406793 | orchestrator | ok: [testbed-manager] 2026-03-11 00:25:56.406813 | orchestrator | 2026-03-11 00:25:56.406833 | orchestrator | TASK [osism.commons.resolvconf : 
Check minimum and maximum number of name servers] *** 2026-03-11 00:25:56.406854 | orchestrator | Wednesday 11 March 2026 00:25:56 +0000 (0:00:00.504) 0:00:11.975 ******* 2026-03-11 00:25:56.406901 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:25:56.406919 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:25:56.406959 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:26:08.649279 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:26:08.649431 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:26:08.649459 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:26:08.649478 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:26:08.649497 | orchestrator | 2026-03-11 00:26:08.649515 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2026-03-11 00:26:08.649558 | orchestrator | Wednesday 11 March 2026 00:25:56 +0000 (0:00:00.214) 0:00:12.190 ******* 2026-03-11 00:26:08.649584 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:26:08.649645 | orchestrator | 2026-03-11 00:26:08.649666 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2026-03-11 00:26:08.649686 | orchestrator | Wednesday 11 March 2026 00:25:56 +0000 (0:00:00.271) 0:00:12.461 ******* 2026-03-11 00:26:08.649705 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:26:08.649725 | orchestrator | 2026-03-11 00:26:08.649737 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2026-03-11 
00:26:08.649751 | orchestrator | Wednesday 11 March 2026 00:25:57 +0000 (0:00:00.379) 0:00:12.840 ******* 2026-03-11 00:26:08.649764 | orchestrator | ok: [testbed-manager] 2026-03-11 00:26:08.649778 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:08.649790 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:26:08.649802 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:26:08.649814 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:08.649826 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:08.649837 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:26:08.649848 | orchestrator | 2026-03-11 00:26:08.649950 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2026-03-11 00:26:08.649994 | orchestrator | Wednesday 11 March 2026 00:25:58 +0000 (0:00:01.388) 0:00:14.229 ******* 2026-03-11 00:26:08.650084 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:26:08.650107 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:26:08.650125 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:26:08.650137 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:26:08.650148 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:26:08.650159 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:26:08.650169 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:26:08.650180 | orchestrator | 2026-03-11 00:26:08.650191 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2026-03-11 00:26:08.650232 | orchestrator | Wednesday 11 March 2026 00:25:58 +0000 (0:00:00.198) 0:00:14.427 ******* 2026-03-11 00:26:08.650244 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:08.650255 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:08.650265 | orchestrator | ok: [testbed-manager] 2026-03-11 00:26:08.650276 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:08.650286 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:26:08.650297 | orchestrator 
| ok: [testbed-node-1] 2026-03-11 00:26:08.650307 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:26:08.650318 | orchestrator | 2026-03-11 00:26:08.650328 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2026-03-11 00:26:08.650339 | orchestrator | Wednesday 11 March 2026 00:25:59 +0000 (0:00:00.594) 0:00:15.022 ******* 2026-03-11 00:26:08.650350 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:26:08.650361 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:26:08.650371 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:26:08.650382 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:26:08.650392 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:26:08.650403 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:26:08.650413 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:26:08.650424 | orchestrator | 2026-03-11 00:26:08.650435 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2026-03-11 00:26:08.650447 | orchestrator | Wednesday 11 March 2026 00:25:59 +0000 (0:00:00.249) 0:00:15.272 ******* 2026-03-11 00:26:08.650458 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:26:08.650480 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:26:08.650491 | orchestrator | ok: [testbed-manager] 2026-03-11 00:26:08.650502 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:26:08.650514 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:26:08.650533 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:26:08.650560 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:26:08.650580 | orchestrator | 2026-03-11 00:26:08.650597 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2026-03-11 00:26:08.650616 | orchestrator | Wednesday 11 March 2026 00:26:00 +0000 (0:00:00.511) 0:00:15.783 ******* 2026-03-11 00:26:08.650633 | orchestrator | ok: 
[testbed-manager] 2026-03-11 00:26:08.650651 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:26:08.650667 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:26:08.650685 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:26:08.650703 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:26:08.650721 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:26:08.650740 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:26:08.650758 | orchestrator | 2026-03-11 00:26:08.650776 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2026-03-11 00:26:08.650796 | orchestrator | Wednesday 11 March 2026 00:26:01 +0000 (0:00:01.131) 0:00:16.915 ******* 2026-03-11 00:26:08.650814 | orchestrator | ok: [testbed-manager] 2026-03-11 00:26:08.650832 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:08.650853 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:26:08.650961 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:26:08.650980 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:08.650998 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:26:08.651018 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:08.651037 | orchestrator | 2026-03-11 00:26:08.651054 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2026-03-11 00:26:08.651073 | orchestrator | Wednesday 11 March 2026 00:26:02 +0000 (0:00:01.179) 0:00:18.094 ******* 2026-03-11 00:26:08.651111 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:26:08.651125 | orchestrator | 2026-03-11 00:26:08.651136 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2026-03-11 00:26:08.651147 | orchestrator | Wednesday 11 March 2026 
00:26:02 +0000 (0:00:00.311) 0:00:18.405 ******* 2026-03-11 00:26:08.651171 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:26:08.651182 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:26:08.651192 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:26:08.651203 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:26:08.651214 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:26:08.651224 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:26:08.651235 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:26:08.651246 | orchestrator | 2026-03-11 00:26:08.651256 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2026-03-11 00:26:08.651267 | orchestrator | Wednesday 11 March 2026 00:26:04 +0000 (0:00:01.491) 0:00:19.897 ******* 2026-03-11 00:26:08.651278 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:08.651289 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:08.651300 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:08.651310 | orchestrator | ok: [testbed-manager] 2026-03-11 00:26:08.651321 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:26:08.651332 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:26:08.651342 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:26:08.651353 | orchestrator | 2026-03-11 00:26:08.651363 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2026-03-11 00:26:08.651374 | orchestrator | Wednesday 11 March 2026 00:26:04 +0000 (0:00:00.197) 0:00:20.094 ******* 2026-03-11 00:26:08.651385 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:08.651396 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:08.651407 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:08.651418 | orchestrator | ok: [testbed-manager] 2026-03-11 00:26:08.651428 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:26:08.651439 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:26:08.651450 | 
orchestrator | ok: [testbed-node-2] 2026-03-11 00:26:08.651460 | orchestrator | 2026-03-11 00:26:08.651471 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2026-03-11 00:26:08.651482 | orchestrator | Wednesday 11 March 2026 00:26:04 +0000 (0:00:00.215) 0:00:20.310 ******* 2026-03-11 00:26:08.651493 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:08.651506 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:08.651529 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:08.651556 | orchestrator | ok: [testbed-manager] 2026-03-11 00:26:08.651572 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:26:08.651591 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:26:08.651609 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:26:08.651627 | orchestrator | 2026-03-11 00:26:08.651643 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2026-03-11 00:26:08.651659 | orchestrator | Wednesday 11 March 2026 00:26:04 +0000 (0:00:00.196) 0:00:20.506 ******* 2026-03-11 00:26:08.651677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:26:08.651695 | orchestrator | 2026-03-11 00:26:08.651713 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2026-03-11 00:26:08.651731 | orchestrator | Wednesday 11 March 2026 00:26:05 +0000 (0:00:00.249) 0:00:20.756 ******* 2026-03-11 00:26:08.651749 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:08.651767 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:08.651784 | orchestrator | ok: [testbed-manager] 2026-03-11 00:26:08.651802 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:08.651820 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:26:08.651838 | orchestrator | ok: 
[testbed-node-1] 2026-03-11 00:26:08.651885 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:26:08.651906 | orchestrator | 2026-03-11 00:26:08.651924 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2026-03-11 00:26:08.651942 | orchestrator | Wednesday 11 March 2026 00:26:05 +0000 (0:00:00.553) 0:00:21.309 ******* 2026-03-11 00:26:08.651960 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:26:08.651980 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:26:08.652012 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:26:08.652033 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:26:08.652052 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:26:08.652070 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:26:08.652088 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:26:08.652100 | orchestrator | 2026-03-11 00:26:08.652110 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2026-03-11 00:26:08.652121 | orchestrator | Wednesday 11 March 2026 00:26:05 +0000 (0:00:00.206) 0:00:21.516 ******* 2026-03-11 00:26:08.652132 | orchestrator | ok: [testbed-manager] 2026-03-11 00:26:08.652143 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:08.652153 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:08.652164 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:08.652175 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:26:08.652185 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:26:08.652195 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:26:08.652206 | orchestrator | 2026-03-11 00:26:08.652217 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2026-03-11 00:26:08.652227 | orchestrator | Wednesday 11 March 2026 00:26:06 +0000 (0:00:01.125) 0:00:22.641 ******* 2026-03-11 00:26:08.652238 | orchestrator | ok: [testbed-manager] 2026-03-11 
00:26:08.652249 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:08.652259 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:08.652269 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:08.652280 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:26:08.652290 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:26:08.652301 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:26:08.652311 | orchestrator | 2026-03-11 00:26:08.652322 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2026-03-11 00:26:08.652333 | orchestrator | Wednesday 11 March 2026 00:26:07 +0000 (0:00:00.617) 0:00:23.258 ******* 2026-03-11 00:26:08.652343 | orchestrator | ok: [testbed-manager] 2026-03-11 00:26:08.652354 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:08.652364 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:08.652375 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:08.652398 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:26:50.197446 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:26:50.197559 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:26:50.197578 | orchestrator | 2026-03-11 00:26:50.197593 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2026-03-11 00:26:50.197603 | orchestrator | Wednesday 11 March 2026 00:26:08 +0000 (0:00:01.241) 0:00:24.500 ******* 2026-03-11 00:26:50.197611 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:50.197620 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:50.197627 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:50.197635 | orchestrator | changed: [testbed-manager] 2026-03-11 00:26:50.197643 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:26:50.197650 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:26:50.197658 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:26:50.197666 | orchestrator | 2026-03-11 00:26:50.197674 | orchestrator | TASK 
[osism.services.rsyslog : Gather variables for each operating system] ***** 2026-03-11 00:26:50.197682 | orchestrator | Wednesday 11 March 2026 00:26:26 +0000 (0:00:17.671) 0:00:42.172 ******* 2026-03-11 00:26:50.197690 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:50.197698 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:50.197706 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:50.197714 | orchestrator | ok: [testbed-manager] 2026-03-11 00:26:50.197722 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:26:50.197729 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:26:50.197737 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:26:50.197745 | orchestrator | 2026-03-11 00:26:50.197752 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2026-03-11 00:26:50.197760 | orchestrator | Wednesday 11 March 2026 00:26:26 +0000 (0:00:00.211) 0:00:42.383 ******* 2026-03-11 00:26:50.197768 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:50.197798 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:50.197834 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:50.197845 | orchestrator | ok: [testbed-manager] 2026-03-11 00:26:50.197853 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:26:50.197861 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:26:50.197868 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:26:50.197876 | orchestrator | 2026-03-11 00:26:50.197884 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2026-03-11 00:26:50.197892 | orchestrator | Wednesday 11 March 2026 00:26:26 +0000 (0:00:00.203) 0:00:42.587 ******* 2026-03-11 00:26:50.197899 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:50.197907 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:50.197915 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:50.197922 | orchestrator | ok: [testbed-manager] 2026-03-11 00:26:50.197931 | orchestrator | ok: 
[testbed-node-0] 2026-03-11 00:26:50.197941 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:26:50.197949 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:26:50.197959 | orchestrator | 2026-03-11 00:26:50.197968 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2026-03-11 00:26:50.197977 | orchestrator | Wednesday 11 March 2026 00:26:27 +0000 (0:00:00.203) 0:00:42.791 ******* 2026-03-11 00:26:50.197987 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:26:50.197998 | orchestrator | 2026-03-11 00:26:50.198008 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2026-03-11 00:26:50.198065 | orchestrator | Wednesday 11 March 2026 00:26:27 +0000 (0:00:00.258) 0:00:43.050 ******* 2026-03-11 00:26:50.198075 | orchestrator | ok: [testbed-manager] 2026-03-11 00:26:50.198085 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:50.198093 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:26:50.198102 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:50.198111 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:26:50.198120 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:26:50.198129 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:50.198138 | orchestrator | 2026-03-11 00:26:50.198146 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2026-03-11 00:26:50.198154 | orchestrator | Wednesday 11 March 2026 00:26:29 +0000 (0:00:01.891) 0:00:44.941 ******* 2026-03-11 00:26:50.198162 | orchestrator | changed: [testbed-manager] 2026-03-11 00:26:50.198186 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:26:50.198194 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:26:50.198202 | orchestrator | 
changed: [testbed-node-4] 2026-03-11 00:26:50.198210 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:26:50.198218 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:26:50.198229 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:26:50.198237 | orchestrator | 2026-03-11 00:26:50.198246 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2026-03-11 00:26:50.198254 | orchestrator | Wednesday 11 March 2026 00:26:30 +0000 (0:00:01.103) 0:00:46.044 ******* 2026-03-11 00:26:50.198261 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:26:50.198269 | orchestrator | ok: [testbed-manager] 2026-03-11 00:26:50.198277 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:26:50.198284 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:26:50.198292 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:26:50.198300 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:26:50.198307 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:26:50.198315 | orchestrator | 2026-03-11 00:26:50.198323 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2026-03-11 00:26:50.198331 | orchestrator | Wednesday 11 March 2026 00:26:31 +0000 (0:00:00.937) 0:00:46.981 ******* 2026-03-11 00:26:50.198339 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:26:50.198356 | orchestrator | 2026-03-11 00:26:50.198364 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2026-03-11 00:26:50.198373 | orchestrator | Wednesday 11 March 2026 00:26:31 +0000 (0:00:00.286) 0:00:47.268 ******* 2026-03-11 00:26:50.198380 | orchestrator | changed: [testbed-manager] 2026-03-11 00:26:50.198388 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:26:50.198396 | 
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-1]

TASK [osism.services.rsyslog : Include additional log server tasks] ************
Wednesday 11 March 2026 00:26:32 +0000 (0:00:01.141) 0:00:48.410 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.services.rsyslog : Include logrotate tasks] ************************
Wednesday 11 March 2026 00:26:32 +0000 (0:00:00.210) 0:00:48.620 *******
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/logrotate.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.services.rsyslog : Ensure logrotate package is installed] **********
Wednesday 11 March 2026 00:26:33 +0000 (0:00:01.832) 0:00:48.915 *******
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [osism.services.rsyslog : Configure logrotate for rsyslog] ****************
Wednesday 11 March 2026 00:26:35 +0000 (0:00:01.832) 0:00:50.748 *******
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [osism.commons.systohc : Install util-linux-extra package] ****************
Wednesday 11 March 2026 00:26:36 +0000 (0:00:01.180) 0:00:51.928 *******
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-0]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-manager]
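The "Forward syslog message to local fluentd daemon" task above drops a forwarding rule into rsyslog's configuration so that local syslog traffic reaches the node-local fluentd daemon. A hypothetical sketch of such a rule in rsyslog's RainerScript syntax — the actual template, file path, target port, and protocol used by osism.services.rsyslog are assumptions, not taken from this log:

```
# e.g. /etc/rsyslog.d/10-fluentd.conf (path, port, and protocol assumed)
*.* action(type="omfwd" target="127.0.0.1" port="5140" protocol="udp")
```

The `omfwd` output module handles both UDP and TCP forwarding; which one a deployment picks depends on the fluentd `in_syslog` source configuration on the receiving side.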
TASK [osism.commons.systohc : Sync hardware clock] *****************************
Wednesday 11 March 2026 00:26:47 +0000 (0:00:10.877) 0:01:02.806 *******
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-5]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-1]

TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
Wednesday 11 March 2026 00:26:48 +0000 (0:00:01.549) 0:01:04.355 *******
ok: [testbed-node-3]
ok: [testbed-manager]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Gather variables for each operating system] *****
Wednesday 11 March 2026 00:26:49 +0000 (0:00:00.887) 0:01:05.244 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
Wednesday 11 March 2026 00:26:49 +0000 (0:00:00.196) 0:01:05.440 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Include distribution specific package tasks] ****
Wednesday 11 March 2026 00:26:49 +0000 (0:00:00.197) 0:01:05.638 *******
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2

TASK [osism.commons.packages : Install needrestart package] ********************
Wednesday 11 March 2026 00:26:50 +0000 (0:00:00.251) 0:01:05.890 *******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Set needrestart mode] ***************************
Wednesday 11 March 2026 00:26:52 +0000 (0:00:02.091) 0:01:07.981 *******
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-1]
changed: [testbed-node-4]
changed: [testbed-node-0]

TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
Wednesday 11 March 2026 00:26:52 +0000 (0:00:00.642) 0:01:08.624 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Update package cache] ***************************
Wednesday 11 March 2026 00:26:53 +0000 (0:00:00.201) 0:01:08.825 *******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [osism.commons.packages : Download upgrade packages] **********************
Wednesday 11 March 2026 00:26:54 +0000 (0:00:01.290) 0:01:10.116 *******
changed: [testbed-manager]
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [osism.commons.packages : Upgrade packages] *******************************
Wednesday 11 March 2026 00:26:56 +0000 (0:00:02.288) 0:01:12.405 *******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-5]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [osism.commons.packages : Download required packages] *********************
Wednesday 11 March 2026 00:26:59 +0000 (0:00:03.240) 0:01:15.645 *******
ok: [testbed-manager]
ok: [testbed-node-2]
ok: [testbed-node-4]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-1]

TASK [osism.commons.packages : Install required packages] **********************
Wednesday 11 March 2026 00:27:36 +0000 (0:00:36.230) 0:01:51.875 *******
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-5]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-1]

TASK [osism.commons.packages : Remove useless packages from the cache] *********
Wednesday 11 March 2026 00:28:59 +0000 (0:01:23.313) 0:03:15.189 *******
ok: [testbed-manager]
ok: [testbed-node-3]
ok: [testbed-node-2]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-1]

TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
Wednesday 11 March 2026 00:29:01 +0000 (0:00:02.337) 0:03:17.527 *******
ok: [testbed-node-4]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
changed: [testbed-manager]

TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
Wednesday 11 March 2026 00:29:14 +0000 (0:00:12.239) 0:03:29.767 *******
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'network', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})

TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
Wednesday 11 March 2026 00:29:14 +0000 (0:00:00.391) 0:03:30.158 *******
skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
skipping: [testbed-manager]
changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})

TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
Wednesday 11 March 2026 00:29:16 +0000 (0:00:01.785) 0:03:31.944 *******
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
skipping: [testbed-node-5]
skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
skipping: [testbed-manager]
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})

TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
Wednesday 11 March 2026 00:29:23 +0000 (0:00:07.129) 0:03:39.074 *******
changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})

TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
Wednesday 11 March 2026 00:29:24 +0000 (0:00:01.563) 0:03:40.637 *******
skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-2]
changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
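The sysctl tasks above apply each parameter group key by key on the matching hosts. Written out as an equivalent plain `/etc/sysctl.d/` drop-in, the rabbitmq set from this run corresponds to the following fragment (the values are taken directly from the task items in the log; the file name is an assumption, and the actual role may apply them via Ansible's sysctl module rather than a single file):

```
# Equivalent sysctl.d fragment for the "rabbitmq" parameter group
# applied on testbed-node-0/1/2 in this run (file name assumed).
net.ipv4.tcp_keepalive_time = 6
net.ipv4.tcp_keepalive_intvl = 3
net.ipv4.tcp_keepalive_probes = 3
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_tw_reuse = 1
net.core.somaxconn = 4096
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_max_syn_backlog = 8192
```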
TASK [osism.commons.sysctl : Set sysctl parameters on network] *****************
Wednesday 11 March 2026 00:29:26 +0000 (0:00:01.503) 0:03:42.141 *******
skipping: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-4]
skipping: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-node-5]
skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
skipping: [testbed-manager]
changed: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
changed: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})

TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
Wednesday 11 March 2026 00:29:28 +0000 (0:00:01.646) 0:03:43.788 *******
skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
skipping: [testbed-node-2]
changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})

TASK [osism.commons.limits : Include limits tasks] *****************************
Wednesday 11 March 2026 00:29:29 +0000 (0:00:01.584) 0:03:45.372 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [osism.commons.services : Populate service facts] *************************
Wednesday 11 March 2026 00:29:29 +0000 (0:00:00.281) 0:03:45.654 *******
ok: [testbed-node-2]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-1]
ok: [testbed-manager]

TASK [osism.commons.services : Check services] *********************************
Wednesday 11 March 2026 00:29:34 +0000 (0:00:04.978) 0:03:50.632 *******
skipping: [testbed-node-3] => (item=nscd)
skipping: [testbed-node-4] => (item=nscd)
skipping: [testbed-node-3]
skipping: [testbed-node-5] => (item=nscd)
skipping: [testbed-node-4]
skipping: [testbed-manager] => (item=nscd)
skipping: [testbed-node-5]
skipping: [testbed-manager]
skipping: [testbed-node-0] => (item=nscd)
skipping: [testbed-node-1] => (item=nscd)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=nscd)
| orchestrator | skipping: [testbed-node-2] 2026-03-11 00:29:40.908902 | orchestrator | 2026-03-11 00:29:40.908909 | orchestrator | TASK [osism.commons.services : Start/enable required services] ***************** 2026-03-11 00:29:40.908915 | orchestrator | Wednesday 11 March 2026 00:29:35 +0000 (0:00:00.412) 0:03:51.045 ******* 2026-03-11 00:29:40.908923 | orchestrator | ok: [testbed-manager] => (item=cron) 2026-03-11 00:29:40.908930 | orchestrator | ok: [testbed-node-3] => (item=cron) 2026-03-11 00:29:40.908937 | orchestrator | ok: [testbed-node-4] => (item=cron) 2026-03-11 00:29:40.908959 | orchestrator | ok: [testbed-node-5] => (item=cron) 2026-03-11 00:29:40.908967 | orchestrator | ok: [testbed-node-0] => (item=cron) 2026-03-11 00:29:40.908973 | orchestrator | ok: [testbed-node-1] => (item=cron) 2026-03-11 00:29:40.908989 | orchestrator | ok: [testbed-node-2] => (item=cron) 2026-03-11 00:29:40.908995 | orchestrator | 2026-03-11 00:29:40.909002 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ****** 2026-03-11 00:29:40.909010 | orchestrator | Wednesday 11 March 2026 00:29:36 +0000 (0:00:01.137) 0:03:52.182 ******* 2026-03-11 00:29:40.909019 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:29:40.909028 | orchestrator | 2026-03-11 00:29:40.909035 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] ************************* 2026-03-11 00:29:40.909041 | orchestrator | Wednesday 11 March 2026 00:29:36 +0000 (0:00:00.429) 0:03:52.612 ******* 2026-03-11 00:29:40.909047 | orchestrator | ok: [testbed-manager] 2026-03-11 00:29:40.909054 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:29:40.909060 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:29:40.909066 | orchestrator | ok: 
[testbed-node-5] 2026-03-11 00:29:40.909073 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:29:40.909079 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:29:40.909085 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:29:40.909091 | orchestrator | 2026-03-11 00:29:40.909097 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] ************* 2026-03-11 00:29:40.909104 | orchestrator | Wednesday 11 March 2026 00:29:38 +0000 (0:00:01.607) 0:03:54.220 ******* 2026-03-11 00:29:40.909110 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:29:40.909116 | orchestrator | ok: [testbed-manager] 2026-03-11 00:29:40.909123 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:29:40.909129 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:29:40.909135 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:29:40.909141 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:29:40.909147 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:29:40.909153 | orchestrator | 2026-03-11 00:29:40.909160 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] ************** 2026-03-11 00:29:40.909166 | orchestrator | Wednesday 11 March 2026 00:29:39 +0000 (0:00:00.601) 0:03:54.822 ******* 2026-03-11 00:29:40.909172 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:29:40.909178 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:29:40.909184 | orchestrator | changed: [testbed-manager] 2026-03-11 00:29:40.909190 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:29:40.909197 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:29:40.909202 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:29:40.909208 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:29:40.909213 | orchestrator | 2026-03-11 00:29:40.909219 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] ********** 2026-03-11 00:29:40.909224 | orchestrator | Wednesday 11 March 2026 00:29:39 +0000 (0:00:00.636) 
0:03:55.458 ******* 2026-03-11 00:29:40.909230 | orchestrator | ok: [testbed-manager] 2026-03-11 00:29:40.909253 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:29:40.909259 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:29:40.909265 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:29:40.909271 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:29:40.909277 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:29:40.909283 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:29:40.909289 | orchestrator | 2026-03-11 00:29:40.909295 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] **************************** 2026-03-11 00:29:40.909302 | orchestrator | Wednesday 11 March 2026 00:29:40 +0000 (0:00:00.582) 0:03:56.041 ******* 2026-03-11 00:29:40.909314 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773187481.158008, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 00:29:40.909331 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773187499.397877, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 00:29:40.909337 | orchestrator | changed: 
[testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773187509.9278462, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 00:29:40.909360 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773187498.4344943, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 00:29:46.214371 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773187495.2613397, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 00:29:46.214512 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 
'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773187509.966212, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 00:29:46.214542 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1773187506.8149958, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 00:29:46.214584 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 00:29:46.214705 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 00:29:46.214730 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 00:29:46.214780 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 00:29:46.214829 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 00:29:46.214842 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 
'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 00:29:46.214854 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 00:29:46.214868 | orchestrator | 2026-03-11 00:29:46.214883 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2026-03-11 00:29:46.214898 | orchestrator | Wednesday 11 March 2026 00:29:41 +0000 (0:00:01.092) 0:03:57.134 ******* 2026-03-11 00:29:46.214911 | orchestrator | changed: [testbed-manager] 2026-03-11 00:29:46.214925 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:29:46.214937 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:29:46.214959 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:29:46.214973 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:29:46.214985 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:29:46.214997 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:29:46.215010 | orchestrator | 2026-03-11 00:29:46.215023 | orchestrator | TASK [osism.commons.motd : Copy issue file] 
************************************ 2026-03-11 00:29:46.215035 | orchestrator | Wednesday 11 March 2026 00:29:42 +0000 (0:00:01.190) 0:03:58.324 ******* 2026-03-11 00:29:46.215047 | orchestrator | changed: [testbed-manager] 2026-03-11 00:29:46.215060 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:29:46.215072 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:29:46.215091 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:29:46.215103 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:29:46.215116 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:29:46.215128 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:29:46.215140 | orchestrator | 2026-03-11 00:29:46.215153 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ******************************** 2026-03-11 00:29:46.215167 | orchestrator | Wednesday 11 March 2026 00:29:43 +0000 (0:00:01.127) 0:03:59.452 ******* 2026-03-11 00:29:46.215186 | orchestrator | changed: [testbed-manager] 2026-03-11 00:29:46.215205 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:29:46.215223 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:29:46.215242 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:29:46.215261 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:29:46.215273 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:29:46.215283 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:29:46.215293 | orchestrator | 2026-03-11 00:29:46.215302 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2026-03-11 00:29:46.215312 | orchestrator | Wednesday 11 March 2026 00:29:44 +0000 (0:00:01.165) 0:04:00.618 ******* 2026-03-11 00:29:46.215322 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:29:46.215332 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:29:46.215342 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:29:46.215351 | orchestrator | skipping: [testbed-manager] 
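The motd tasks above install static `/etc/motd`, `/etc/issue`, and `/etc/issue.net` files and then run "Configure SSH to not print the motd", presumably a lineinfile-style edit ensuring `PrintMotd no` in `sshd_config` so the banner is not printed twice. A minimal sketch of that replace-or-append pattern (the function name is hypothetical, not the role's code):

```python
def set_sshd_option(config_text, key, value):
    """Replace an existing 'Key value' directive or append one at the end,
    mimicking an Ansible lineinfile-style edit (sketch, not the role's code)."""
    out, found = [], False
    for line in config_text.splitlines():
        stripped = line.strip().lower()
        # Match 'PrintMotd ...' case-insensitively, as sshd_config keywords are.
        if stripped.startswith(key.lower() + " ") or stripped == key.lower():
            out.append(f"{key} {value}")
            found = True
        else:
            out.append(line)
    if not found:
        out.append(f"{key} {value}")
    return "\n".join(out)

conf = "Port 22\nPrintMotd yes"
print(set_sshd_option(conf, "PrintMotd", "no"))
```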
2026-03-11 00:29:46.215360 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:29:46.215370 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:29:46.215379 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:29:46.215388 | orchestrator | 2026-03-11 00:29:46.215398 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 2026-03-11 00:29:46.215407 | orchestrator | Wednesday 11 March 2026 00:29:45 +0000 (0:00:00.244) 0:04:00.863 ******* 2026-03-11 00:29:46.215417 | orchestrator | ok: [testbed-manager] 2026-03-11 00:29:46.215428 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:29:46.215437 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:29:46.215446 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:29:46.215456 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:29:46.215465 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:29:46.215474 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:29:46.215484 | orchestrator | 2026-03-11 00:29:46.215493 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ******** 2026-03-11 00:29:46.215503 | orchestrator | Wednesday 11 March 2026 00:29:45 +0000 (0:00:00.725) 0:04:01.589 ******* 2026-03-11 00:29:46.215514 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:29:46.215526 | orchestrator | 2026-03-11 00:29:46.215536 | orchestrator | TASK [osism.services.rng : Install rng package] ******************************** 2026-03-11 00:29:46.215553 | orchestrator | Wednesday 11 March 2026 00:29:46 +0000 (0:00:00.320) 0:04:01.909 ******* 2026-03-11 00:31:07.702933 | orchestrator | ok: [testbed-manager] 2026-03-11 00:31:07.703046 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:31:07.703060 | orchestrator | changed: 
[testbed-node-5] 2026-03-11 00:31:07.703070 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:31:07.703103 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:31:07.703114 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:31:07.703123 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:31:07.703133 | orchestrator | 2026-03-11 00:31:07.703145 | orchestrator | TASK [osism.services.rng : Remove haveged package] ***************************** 2026-03-11 00:31:07.703156 | orchestrator | Wednesday 11 March 2026 00:29:55 +0000 (0:00:09.504) 0:04:11.414 ******* 2026-03-11 00:31:07.703166 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:31:07.703175 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:31:07.703185 | orchestrator | ok: [testbed-manager] 2026-03-11 00:31:07.703195 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:31:07.703204 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:31:07.703213 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:31:07.703223 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:31:07.703232 | orchestrator | 2026-03-11 00:31:07.703242 | orchestrator | TASK [osism.services.rng : Manage rng service] ********************************* 2026-03-11 00:31:07.703252 | orchestrator | Wednesday 11 March 2026 00:29:57 +0000 (0:00:01.504) 0:04:12.918 ******* 2026-03-11 00:31:07.703261 | orchestrator | ok: [testbed-manager] 2026-03-11 00:31:07.703271 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:31:07.703280 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:31:07.703289 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:31:07.703299 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:31:07.703308 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:31:07.703317 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:31:07.703327 | orchestrator | 2026-03-11 00:31:07.703336 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ****** 2026-03-11 00:31:07.703346 | orchestrator | 
Wednesday 11 March 2026 00:29:58 +0000 (0:00:01.031) 0:04:13.950 ******* 2026-03-11 00:31:07.703355 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:31:07.703365 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:31:07.703374 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:31:07.703384 | orchestrator | ok: [testbed-manager] 2026-03-11 00:31:07.703393 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:31:07.703402 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:31:07.703411 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:31:07.703421 | orchestrator | 2026-03-11 00:31:07.703431 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] *** 2026-03-11 00:31:07.703441 | orchestrator | Wednesday 11 March 2026 00:29:58 +0000 (0:00:00.327) 0:04:14.278 ******* 2026-03-11 00:31:07.703451 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:31:07.703460 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:31:07.703469 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:31:07.703506 | orchestrator | ok: [testbed-manager] 2026-03-11 00:31:07.703515 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:31:07.703525 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:31:07.703534 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:31:07.703544 | orchestrator | 2026-03-11 00:31:07.703553 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] *** 2026-03-11 00:31:07.703563 | orchestrator | Wednesday 11 March 2026 00:29:58 +0000 (0:00:00.272) 0:04:14.550 ******* 2026-03-11 00:31:07.703573 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:31:07.703582 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:31:07.703591 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:31:07.703601 | orchestrator | ok: [testbed-manager] 2026-03-11 00:31:07.703610 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:31:07.703620 | orchestrator | ok: [testbed-node-1] 2026-03-11 
00:31:07.703629 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:31:07.703639 | orchestrator | 2026-03-11 00:31:07.703648 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] ************************** 2026-03-11 00:31:07.703658 | orchestrator | Wednesday 11 March 2026 00:29:59 +0000 (0:00:00.297) 0:04:14.848 ******* 2026-03-11 00:31:07.703668 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:31:07.703678 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:31:07.703687 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:31:07.703704 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:31:07.703713 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:31:07.703723 | orchestrator | ok: [testbed-manager] 2026-03-11 00:31:07.703732 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:31:07.703741 | orchestrator | 2026-03-11 00:31:07.703751 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] ******* 2026-03-11 00:31:07.703761 | orchestrator | Wednesday 11 March 2026 00:30:03 +0000 (0:00:04.659) 0:04:19.508 ******* 2026-03-11 00:31:07.703773 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:31:07.703785 | orchestrator | 2026-03-11 00:31:07.703795 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************ 2026-03-11 00:31:07.703804 | orchestrator | Wednesday 11 March 2026 00:30:04 +0000 (0:00:00.366) 0:04:19.875 ******* 2026-03-11 00:31:07.703814 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)  2026-03-11 00:31:07.703823 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)  2026-03-11 00:31:07.703833 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:31:07.703843 | orchestrator | skipping: [testbed-node-4] => 
(item=apt-daily-upgrade)  2026-03-11 00:31:07.703852 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)  2026-03-11 00:31:07.703862 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)  2026-03-11 00:31:07.703871 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)  2026-03-11 00:31:07.703881 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:31:07.703890 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)  2026-03-11 00:31:07.703900 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)  2026-03-11 00:31:07.703909 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:31:07.703918 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)  2026-03-11 00:31:07.703928 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:31:07.703937 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)  2026-03-11 00:31:07.703947 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)  2026-03-11 00:31:07.703956 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)  2026-03-11 00:31:07.703982 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:31:07.703993 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:31:07.704002 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)  2026-03-11 00:31:07.704012 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)  2026-03-11 00:31:07.704022 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:31:07.704031 | orchestrator | 2026-03-11 00:31:07.704041 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] *************************** 2026-03-11 00:31:07.704051 | orchestrator | Wednesday 11 March 2026 00:30:04 +0000 (0:00:00.339) 0:04:20.214 ******* 2026-03-11 00:31:07.704061 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:31:07.704071 | orchestrator | 2026-03-11 00:31:07.704081 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ******************************** 2026-03-11 00:31:07.704091 | orchestrator | Wednesday 11 March 2026 00:30:04 +0000 (0:00:00.393) 0:04:20.607 ******* 2026-03-11 00:31:07.704100 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)  2026-03-11 00:31:07.704110 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:31:07.704120 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)  2026-03-11 00:31:07.704129 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)  2026-03-11 00:31:07.704139 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:31:07.704148 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:31:07.704158 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)  2026-03-11 00:31:07.704174 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:31:07.704183 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)  2026-03-11 00:31:07.704193 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)  2026-03-11 00:31:07.704203 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:31:07.704212 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:31:07.704222 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)  2026-03-11 00:31:07.704232 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:31:07.704241 | orchestrator | 2026-03-11 00:31:07.704251 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] ************************** 2026-03-11 00:31:07.704261 | orchestrator | Wednesday 11 March 2026 00:30:05 +0000 (0:00:00.328) 0:04:20.936 ******* 2026-03-11 00:31:07.704286 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:31:07.704296 | orchestrator |
2026-03-11 00:31:07.704306 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2026-03-11 00:31:07.704316 | orchestrator | Wednesday 11 March 2026 00:30:05 +0000 (0:00:00.388) 0:04:21.325 *******
2026-03-11 00:31:07.704330 | orchestrator | changed: [testbed-manager]
2026-03-11 00:31:07.704339 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:31:07.704349 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:31:07.704358 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:31:07.704368 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:31:07.704377 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:31:07.704387 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:31:07.704396 | orchestrator |
2026-03-11 00:31:07.704406 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2026-03-11 00:31:07.704415 | orchestrator | Wednesday 11 March 2026 00:30:39 +0000 (0:00:33.849) 0:04:55.175 *******
2026-03-11 00:31:07.704425 | orchestrator | changed: [testbed-manager]
2026-03-11 00:31:07.704434 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:31:07.704444 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:31:07.704453 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:31:07.704463 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:31:07.704487 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:31:07.704497 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:31:07.704506 | orchestrator |
2026-03-11 00:31:07.704516 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2026-03-11 00:31:07.704526 | orchestrator | Wednesday 11 March 2026 00:30:49 +0000 (0:00:09.665) 0:05:04.840 *******
2026-03-11 00:31:07.704535 | orchestrator | changed: [testbed-manager]
2026-03-11 00:31:07.704545 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:31:07.704554 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:31:07.704564 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:31:07.704573 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:31:07.704583 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:31:07.704592 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:31:07.704602 | orchestrator |
2026-03-11 00:31:07.704611 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2026-03-11 00:31:07.704621 | orchestrator | Wednesday 11 March 2026 00:30:58 +0000 (0:00:09.256) 0:05:14.096 *******
2026-03-11 00:31:07.704630 | orchestrator | ok: [testbed-manager]
2026-03-11 00:31:07.704640 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:31:07.704649 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:31:07.704659 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:31:07.704668 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:31:07.704678 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:31:07.704687 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:31:07.704696 | orchestrator |
2026-03-11 00:31:07.704706 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2026-03-11 00:31:07.704722 | orchestrator | Wednesday 11 March 2026 00:31:00 +0000 (0:00:02.078) 0:05:16.175 *******
2026-03-11 00:31:07.704732 | orchestrator | changed: [testbed-manager]
2026-03-11 00:31:07.704742 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:31:07.704751 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:31:07.704760 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:31:07.704770 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:31:07.704779 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:31:07.704789 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:31:07.704798 | orchestrator |
2026-03-11 00:31:07.704814 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2026-03-11 00:31:19.399698 | orchestrator | Wednesday 11 March 2026 00:31:07 +0000 (0:00:07.218) 0:05:23.393 *******
2026-03-11 00:31:19.399821 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:31:19.399847 | orchestrator |
2026-03-11 00:31:19.399860 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2026-03-11 00:31:19.399873 | orchestrator | Wednesday 11 March 2026 00:31:08 +0000 (0:00:00.441) 0:05:23.835 *******
2026-03-11 00:31:19.399884 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:31:19.399896 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:31:19.399907 | orchestrator | changed: [testbed-manager]
2026-03-11 00:31:19.399918 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:31:19.399928 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:31:19.399939 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:31:19.399950 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:31:19.399961 | orchestrator |
2026-03-11 00:31:19.399972 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2026-03-11 00:31:19.399983 | orchestrator | Wednesday 11 March 2026 00:31:08 +0000 (0:00:00.715) 0:05:24.551 *******
2026-03-11 00:31:19.399994 | orchestrator | ok: [testbed-manager]
2026-03-11 00:31:19.400006 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:31:19.400016 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:31:19.400027 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:31:19.400037 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:31:19.400048 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:31:19.400058 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:31:19.400069 | orchestrator |
2026-03-11 00:31:19.400080 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2026-03-11 00:31:19.400090 | orchestrator | Wednesday 11 March 2026 00:31:11 +0000 (0:00:02.226) 0:05:26.777 *******
2026-03-11 00:31:19.400101 | orchestrator | changed: [testbed-manager]
2026-03-11 00:31:19.400112 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:31:19.400123 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:31:19.400134 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:31:19.400144 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:31:19.400155 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:31:19.400166 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:31:19.400176 | orchestrator |
2026-03-11 00:31:19.400187 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2026-03-11 00:31:19.400198 | orchestrator | Wednesday 11 March 2026 00:31:11 +0000 (0:00:00.838) 0:05:27.615 *******
2026-03-11 00:31:19.400213 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:31:19.400231 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:31:19.400251 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:31:19.400270 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:31:19.400290 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:31:19.400304 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:31:19.400316 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:31:19.400329 | orchestrator |
2026-03-11 00:31:19.400341 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2026-03-11 00:31:19.400371 | orchestrator | Wednesday 11 March 2026 00:31:12 +0000 (0:00:00.256) 0:05:27.872 *******
2026-03-11 00:31:19.400408 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:31:19.400421 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:31:19.400432 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:31:19.400442 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:31:19.400453 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:31:19.400490 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:31:19.400502 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:31:19.400513 | orchestrator |
2026-03-11 00:31:19.400524 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2026-03-11 00:31:19.400535 | orchestrator | Wednesday 11 March 2026 00:31:12 +0000 (0:00:00.266) 0:05:28.233 *******
2026-03-11 00:31:19.400545 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:31:19.400556 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:31:19.400567 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:31:19.400578 | orchestrator | ok: [testbed-manager]
2026-03-11 00:31:19.400589 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:31:19.400600 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:31:19.400610 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:31:19.400621 | orchestrator |
2026-03-11 00:31:19.400631 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2026-03-11 00:31:19.400642 | orchestrator | Wednesday 11 March 2026 00:31:12 +0000 (0:00:00.256) 0:05:28.499 *******
2026-03-11 00:31:19.400653 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:31:19.400664 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:31:19.400675 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:31:19.400686 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:31:19.400696 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:31:19.400707 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:31:19.400718 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:31:19.400728 | orchestrator |
2026-03-11 00:31:19.400739 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2026-03-11 00:31:19.400751 | orchestrator | Wednesday 11 March 2026 00:31:13 +0000 (0:00:00.256) 0:05:28.756 *******
2026-03-11 00:31:19.400762 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:31:19.400773 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:31:19.400783 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:31:19.400794 | orchestrator | ok: [testbed-manager]
2026-03-11 00:31:19.400805 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:31:19.400815 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:31:19.400826 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:31:19.400836 | orchestrator |
2026-03-11 00:31:19.400847 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2026-03-11 00:31:19.400858 | orchestrator | Wednesday 11 March 2026 00:31:13 +0000 (0:00:00.300) 0:05:29.057 *******
2026-03-11 00:31:19.400869 | orchestrator | ok: [testbed-node-3] =>
2026-03-11 00:31:19.400879 | orchestrator |   docker_version: 5:27.5.1
2026-03-11 00:31:19.400890 | orchestrator | ok: [testbed-node-4] =>
2026-03-11 00:31:19.400901 | orchestrator |   docker_version: 5:27.5.1
2026-03-11 00:31:19.400912 | orchestrator | ok: [testbed-node-5] =>
2026-03-11 00:31:19.400923 | orchestrator |   docker_version: 5:27.5.1
2026-03-11 00:31:19.400933 | orchestrator | ok: [testbed-manager] =>
2026-03-11 00:31:19.400944 | orchestrator |   docker_version: 5:27.5.1
2026-03-11 00:31:19.400972 | orchestrator | ok: [testbed-node-0] =>
2026-03-11 00:31:19.400984 | orchestrator |   docker_version: 5:27.5.1
2026-03-11 00:31:19.400994 | orchestrator | ok: [testbed-node-1] =>
2026-03-11 00:31:19.401005 | orchestrator |   docker_version: 5:27.5.1
2026-03-11 00:31:19.401016 | orchestrator | ok: [testbed-node-2] =>
2026-03-11 00:31:19.401026 | orchestrator |   docker_version: 5:27.5.1
2026-03-11 00:31:19.401037 | orchestrator |
2026-03-11 00:31:19.401048 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2026-03-11 00:31:19.401059 | orchestrator | Wednesday 11 March 2026 00:31:13 +0000 (0:00:00.280) 0:05:29.337 *******
2026-03-11 00:31:19.401070 | orchestrator | ok: [testbed-node-3] =>
2026-03-11 00:31:19.401089 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-11 00:31:19.401100 | orchestrator | ok: [testbed-node-4] =>
2026-03-11 00:31:19.401111 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-11 00:31:19.401121 | orchestrator | ok: [testbed-node-5] =>
2026-03-11 00:31:19.401132 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-11 00:31:19.401143 | orchestrator | ok: [testbed-manager] =>
2026-03-11 00:31:19.401153 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-11 00:31:19.401164 | orchestrator | ok: [testbed-node-0] =>
2026-03-11 00:31:19.401174 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-11 00:31:19.401185 | orchestrator | ok: [testbed-node-1] =>
2026-03-11 00:31:19.401195 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-11 00:31:19.401206 | orchestrator | ok: [testbed-node-2] =>
2026-03-11 00:31:19.401216 | orchestrator |   docker_cli_version: 5:27.5.1
2026-03-11 00:31:19.401227 | orchestrator |
2026-03-11 00:31:19.401238 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2026-03-11 00:31:19.401253 | orchestrator | Wednesday 11 March 2026 00:31:13 +0000 (0:00:00.263) 0:05:29.600 *******
2026-03-11 00:31:19.401272 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:31:19.401292 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:31:19.401311 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:31:19.401326 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:31:19.401336 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:31:19.401347 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:31:19.401358 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:31:19.401368 | orchestrator |
2026-03-11 00:31:19.401379 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2026-03-11 00:31:19.401390 | orchestrator | Wednesday 11 March 2026 00:31:14 +0000 (0:00:00.280) 0:05:29.881 *******
2026-03-11 00:31:19.401401 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:31:19.401411 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:31:19.401422 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:31:19.401433 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:31:19.401443 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:31:19.401454 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:31:19.401489 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:31:19.401500 | orchestrator |
2026-03-11 00:31:19.401511 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2026-03-11 00:31:19.401522 | orchestrator | Wednesday 11 March 2026 00:31:14 +0000 (0:00:00.245) 0:05:30.127 *******
2026-03-11 00:31:19.401541 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:31:19.401554 | orchestrator |
2026-03-11 00:31:19.401566 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] ****
2026-03-11 00:31:19.401577 | orchestrator | Wednesday 11 March 2026 00:31:14 +0000 (0:00:00.519) 0:05:30.646 *******
2026-03-11 00:31:19.401587 | orchestrator | ok: [testbed-manager]
2026-03-11 00:31:19.401598 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:31:19.401609 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:31:19.401620 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:31:19.401631 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:31:19.401642 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:31:19.401652 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:31:19.401663 | orchestrator |
2026-03-11 00:31:19.401674 | orchestrator | TASK [osism.services.docker : Gather package facts] ****************************
2026-03-11 00:31:19.401685 | orchestrator | Wednesday 11 March 2026 00:31:15 +0000 (0:00:00.856) 0:05:31.503 *******
2026-03-11 00:31:19.401695 | orchestrator | ok: [testbed-manager]
2026-03-11 00:31:19.401706 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:31:19.401717 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:31:19.401727 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:31:19.401738 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:31:19.401756 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:31:19.401767 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:31:19.401778 | orchestrator |
2026-03-11 00:31:19.401788 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] ***
2026-03-11 00:31:19.401800 | orchestrator | Wednesday 11 March 2026 00:31:19 +0000 (0:00:03.224) 0:05:34.728 *******
2026-03-11 00:31:19.401811 | orchestrator | skipping: [testbed-node-3] => (item=containerd)
2026-03-11 00:31:19.401823 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)
2026-03-11 00:31:19.401834 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)
2026-03-11 00:31:19.401845 | orchestrator | skipping: [testbed-node-4] => (item=containerd)
2026-03-11 00:31:19.401855 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)
2026-03-11 00:31:19.401866 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)
2026-03-11 00:31:19.401877 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:31:19.401888 | orchestrator | skipping: [testbed-node-5] => (item=containerd)
2026-03-11 00:31:19.401899 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)
2026-03-11 00:31:19.401909 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)
2026-03-11 00:31:19.401920 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:31:19.401931 | orchestrator | skipping: [testbed-manager] => (item=containerd)
2026-03-11 00:31:19.401942 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:31:19.401952 | orchestrator | skipping: [testbed-manager] => (item=docker.io)
2026-03-11 00:31:19.401963 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)
2026-03-11 00:31:19.401974 | orchestrator | skipping: [testbed-node-0] => (item=containerd)
2026-03-11 00:31:19.401994 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)
2026-03-11 00:32:28.518195 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)
2026-03-11 00:32:28.518439 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:32:28.518463 | orchestrator | skipping: [testbed-node-1] => (item=containerd)
2026-03-11 00:32:28.518476 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)
2026-03-11 00:32:28.518487 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)
2026-03-11 00:32:28.518499 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:32:28.518510 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:32:28.518525 | orchestrator | skipping: [testbed-node-2] => (item=containerd)
2026-03-11 00:32:28.518544 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)
2026-03-11 00:32:28.518563 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)
2026-03-11 00:32:28.518581 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:32:28.518597 | orchestrator |
2026-03-11 00:32:28.518609 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] *************
2026-03-11 00:32:28.518621 | orchestrator | Wednesday 11 March 2026 00:31:19 +0000 (0:00:00.590) 0:05:35.319 *******
2026-03-11 00:32:28.518633 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:28.518644 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:32:28.518655 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:32:28.518665 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:32:28.518676 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:32:28.518687 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:32:28.518698 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:32:28.518708 | orchestrator |
2026-03-11 00:32:28.518719 | orchestrator | TASK [osism.services.docker : Add repository gpg key] **************************
2026-03-11 00:32:28.518730 | orchestrator | Wednesday 11 March 2026 00:31:28 +0000 (0:00:09.034) 0:05:44.353 *******
2026-03-11 00:32:28.518741 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:28.518752 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:32:28.518763 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:32:28.518778 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:32:28.518797 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:32:28.518813 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:32:28.518865 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:32:28.518899 | orchestrator |
2026-03-11 00:32:28.518919 | orchestrator | TASK [osism.services.docker : Add repository] **********************************
2026-03-11 00:32:28.518939 | orchestrator | Wednesday 11 March 2026 00:31:29 +0000 (0:00:01.053) 0:05:45.406 *******
2026-03-11 00:32:28.518957 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:28.518971 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:32:28.518982 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:32:28.518992 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:32:28.519003 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:32:28.519014 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:32:28.519024 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:32:28.519035 | orchestrator |
2026-03-11 00:32:28.519046 | orchestrator | TASK [osism.services.docker : Update package cache] ****************************
2026-03-11 00:32:28.519057 | orchestrator | Wednesday 11 March 2026 00:31:39 +0000 (0:00:09.601) 0:05:55.008 *******
2026-03-11 00:32:28.519068 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:32:28.519079 | orchestrator | changed: [testbed-manager]
2026-03-11 00:32:28.519104 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:32:28.519116 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:32:28.519126 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:32:28.519137 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:32:28.519148 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:32:28.519159 | orchestrator |
2026-03-11 00:32:28.519169 | orchestrator | TASK [osism.services.docker : Pin docker package version] **********************
2026-03-11 00:32:28.519180 | orchestrator | Wednesday 11 March 2026 00:31:43 +0000 (0:00:03.821) 0:05:58.830 *******
2026-03-11 00:32:28.519191 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:32:28.519207 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:28.519225 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:32:28.519245 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:32:28.519263 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:32:28.519282 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:32:28.519324 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:32:28.519336 | orchestrator |
2026-03-11 00:32:28.519346 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ******************
2026-03-11 00:32:28.519358 | orchestrator | Wednesday 11 March 2026 00:31:44 +0000 (0:00:01.346) 0:06:00.177 *******
2026-03-11 00:32:28.519368 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:32:28.519379 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:28.519390 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:32:28.519401 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:32:28.519411 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:32:28.519422 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:32:28.519436 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:32:28.519453 | orchestrator |
2026-03-11 00:32:28.519472 | orchestrator | TASK [osism.services.docker : Unlock containerd package] ***********************
2026-03-11 00:32:28.519491 | orchestrator | Wednesday 11 March 2026 00:31:46 +0000 (0:00:01.794) 0:06:01.972 *******
2026-03-11 00:32:28.519509 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:32:28.519524 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:32:28.519535 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:32:28.519546 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:32:28.519557 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:32:28.519567 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:32:28.519578 | orchestrator | changed: [testbed-manager]
2026-03-11 00:32:28.519588 | orchestrator |
2026-03-11 00:32:28.519599 | orchestrator | TASK [osism.services.docker : Install containerd package] **********************
2026-03-11 00:32:28.519610 | orchestrator | Wednesday 11 March 2026 00:31:47 +0000 (0:00:00.805) 0:06:02.777 *******
2026-03-11 00:32:28.519620 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:28.519631 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:32:28.519642 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:32:28.519664 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:32:28.519674 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:32:28.519685 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:32:28.519695 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:32:28.519706 | orchestrator |
2026-03-11 00:32:28.519717 | orchestrator | TASK [osism.services.docker : Lock containerd package] *************************
2026-03-11 00:32:28.519750 | orchestrator | Wednesday 11 March 2026 00:31:58 +0000 (0:00:11.196) 0:06:13.973 *******
2026-03-11 00:32:28.519762 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:32:28.519772 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:32:28.519783 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:32:28.519793 | orchestrator | changed: [testbed-manager]
2026-03-11 00:32:28.519804 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:32:28.519815 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:32:28.519825 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:32:28.519836 | orchestrator |
2026-03-11 00:32:28.519848 | orchestrator | TASK [osism.services.docker : Install docker-cli package] **********************
2026-03-11 00:32:28.519866 | orchestrator | Wednesday 11 March 2026 00:31:59 +0000 (0:00:00.894) 0:06:14.868 *******
2026-03-11 00:32:28.519891 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:28.519915 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:32:28.519931 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:32:28.519948 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:32:28.519965 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:32:28.519982 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:32:28.519999 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:32:28.520017 | orchestrator |
2026-03-11 00:32:28.520037 | orchestrator | TASK [osism.services.docker : Install docker package] **************************
2026-03-11 00:32:28.520055 | orchestrator | Wednesday 11 March 2026 00:32:09 +0000 (0:00:09.969) 0:06:24.837 *******
2026-03-11 00:32:28.520073 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:28.520084 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:32:28.520095 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:32:28.520106 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:32:28.520116 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:32:28.520127 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:32:28.520138 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:32:28.520148 | orchestrator |
2026-03-11 00:32:28.520159 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] ***
2026-03-11 00:32:28.520170 | orchestrator | Wednesday 11 March 2026 00:32:21 +0000 (0:00:12.281) 0:06:37.119 *******
2026-03-11 00:32:28.520181 | orchestrator | ok: [testbed-node-3] => (item=python3-docker)
2026-03-11 00:32:28.520192 | orchestrator | ok: [testbed-node-4] => (item=python3-docker)
2026-03-11 00:32:28.520203 | orchestrator | ok: [testbed-node-5] => (item=python3-docker)
2026-03-11 00:32:28.520214 | orchestrator | ok: [testbed-manager] => (item=python3-docker)
2026-03-11 00:32:28.520225 | orchestrator | ok: [testbed-node-0] => (item=python3-docker)
2026-03-11 00:32:28.520236 | orchestrator | ok: [testbed-node-1] => (item=python3-docker)
2026-03-11 00:32:28.520247 | orchestrator | ok: [testbed-node-3] => (item=python-docker)
2026-03-11 00:32:28.520258 | orchestrator | ok: [testbed-node-4] => (item=python-docker)
2026-03-11 00:32:28.520268 | orchestrator | ok: [testbed-node-2] => (item=python3-docker)
2026-03-11 00:32:28.520279 | orchestrator | ok: [testbed-node-5] => (item=python-docker)
2026-03-11 00:32:28.520331 | orchestrator | ok: [testbed-manager] => (item=python-docker)
2026-03-11 00:32:28.520350 | orchestrator | ok: [testbed-node-0] => (item=python-docker)
2026-03-11 00:32:28.520369 | orchestrator | ok: [testbed-node-2] => (item=python-docker)
2026-03-11 00:32:28.520388 | orchestrator | ok: [testbed-node-1] => (item=python-docker)
2026-03-11 00:32:28.520405 | orchestrator |
2026-03-11 00:32:28.520421 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ******************
2026-03-11 00:32:28.520432 | orchestrator | Wednesday 11 March 2026 00:32:22 +0000 (0:00:01.217) 0:06:38.336 *******
2026-03-11 00:32:28.520456 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:32:28.520467 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:32:28.520478 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:32:28.520489 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:32:28.520500 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:32:28.520511 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:32:28.520521 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:32:28.520532 | orchestrator |
2026-03-11 00:32:28.520543 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] ***
2026-03-11 00:32:28.520554 | orchestrator | Wednesday 11 March 2026 00:32:23 +0000 (0:00:00.525) 0:06:38.862 *******
2026-03-11 00:32:28.520565 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:28.520575 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:32:28.520586 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:32:28.520597 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:32:28.520608 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:32:28.520619 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:32:28.520629 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:32:28.520640 | orchestrator |
2026-03-11 00:32:28.520651 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] ***
2026-03-11 00:32:28.520664 | orchestrator | Wednesday 11 March 2026 00:32:27 +0000 (0:00:04.361) 0:06:43.224 *******
2026-03-11 00:32:28.520675 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:32:28.520685 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:32:28.520696 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:32:28.520707 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:32:28.520717 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:32:28.520728 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:32:28.520739 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:32:28.520749 | orchestrator |
2026-03-11 00:32:28.520761 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] ***
2026-03-11 00:32:28.520773 | orchestrator | Wednesday 11 March 2026 00:32:28 +0000 (0:00:00.705) 0:06:43.929 *******
2026-03-11 00:32:28.520783 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)
2026-03-11 00:32:28.520794 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)
2026-03-11 00:32:28.520805 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:32:28.520816 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)
2026-03-11 00:32:28.520827 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)
2026-03-11 00:32:28.520838 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:32:28.520848 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)
2026-03-11 00:32:28.520859 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)
2026-03-11 00:32:28.520870 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:32:28.520893 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)
2026-03-11 00:32:47.650705 | orchestrator | skipping: [testbed-manager] => (item=python-docker)
2026-03-11 00:32:47.650828 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:32:47.650874 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)
2026-03-11 00:32:47.650908 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)
2026-03-11 00:32:47.650992 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:32:47.651008 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)
2026-03-11 00:32:47.651019 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)
2026-03-11 00:32:47.651030 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:32:47.651041 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)
2026-03-11 00:32:47.651052 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)
2026-03-11 00:32:47.651064 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:32:47.651075 | orchestrator |
2026-03-11 00:32:47.651087 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] ***
2026-03-11 00:32:47.651122 | orchestrator | Wednesday 11 March 2026 00:32:28 +0000 (0:00:00.577) 0:06:44.507 *******
2026-03-11 00:32:47.651134 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:32:47.651145 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:32:47.651155 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:32:47.651166 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:32:47.651177 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:32:47.651187 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:32:47.651198 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:32:47.651208 | orchestrator |
2026-03-11 00:32:47.651219 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] ***
2026-03-11 00:32:47.651230 | orchestrator | Wednesday 11 March 2026 00:32:29 +0000 (0:00:00.487) 0:06:44.995 *******
2026-03-11 00:32:47.651300 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:32:47.651314 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:32:47.651326 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:32:47.651338 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:32:47.651350 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:32:47.651362 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:32:47.651374 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:32:47.651387 | orchestrator |
2026-03-11 00:32:47.651400 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] *******
2026-03-11 00:32:47.651412 | orchestrator | Wednesday 11 March 2026 00:32:29 +0000 (0:00:00.504) 0:06:45.499 *******
2026-03-11 00:32:47.651425 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:32:47.651437 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:32:47.651449 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:32:47.651462 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:32:47.651475 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:32:47.651488 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:32:47.651500 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:32:47.651512 | orchestrator |
2026-03-11 00:32:47.651524 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] *****
2026-03-11 00:32:47.651544 | orchestrator | Wednesday 11 March 2026 00:32:30 +0000 (0:00:00.493) 0:06:45.992 *******
2026-03-11 00:32:47.651557 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:32:47.651570 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:32:47.651582 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:47.651594 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:32:47.651607 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:32:47.651619 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:32:47.651632 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:32:47.651644 | orchestrator |
2026-03-11 00:32:47.651654 | orchestrator | TASK [osism.services.docker : Include config tasks] ****************************
2026-03-11 00:32:47.651665 | orchestrator | Wednesday 11 March 2026 00:32:32 +0000 (0:00:01.949) 0:06:47.942 *******
2026-03-11 00:32:47.651677 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:32:47.651691 | orchestrator |
2026-03-11 00:32:47.651702 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************
2026-03-11 00:32:47.651713 | orchestrator | Wednesday 11 March 2026 00:32:33 +0000 (0:00:00.812) 0:06:48.754 *******
2026-03-11 00:32:47.651723 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:32:47.651734 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:32:47.651745 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:32:47.651755 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:47.651766 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:32:47.651777 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:32:47.651790 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:32:47.651810 | orchestrator |
2026-03-11 00:32:47.651828 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] ****************
2026-03-11 00:32:47.651859 | orchestrator | Wednesday 11 March 2026 00:32:33 +0000 (0:00:00.809) 0:06:49.564 *******
2026-03-11 00:32:47.651878 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:32:47.651897 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:32:47.651916 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:32:47.651935 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:47.651954 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:32:47.651968 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:32:47.651978 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:32:47.651989 | orchestrator |
2026-03-11 00:32:47.652000 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] ***********************
2026-03-11 00:32:47.652011 | orchestrator | Wednesday 11 March 2026 00:32:34 +0000 (0:00:01.060) 0:06:50.624 *******
2026-03-11 00:32:47.652021 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:32:47.652032 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:32:47.652042 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:47.652053 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:32:47.652063 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:32:47.652074 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:32:47.652084 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:32:47.652095 | orchestrator |
2026-03-11 00:32:47.652106 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] ***
2026-03-11 00:32:47.652137 | orchestrator | Wednesday 11 March 2026 00:32:36 +0000 (0:00:01.426) 0:06:52.050 *******
2026-03-11 00:32:47.652148 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:32:47.652159 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:32:47.652169 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:32:47.652180 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:32:47.652191 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:32:47.652201 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:32:47.652212 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:32:47.652223 | orchestrator |
2026-03-11 00:32:47.652288 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2026-03-11 00:32:47.652303 | orchestrator | Wednesday 11 March 2026 00:32:37 +0000 (0:00:01.394) 0:06:53.445 *******
2026-03-11 00:32:47.652314 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:32:47.652325 | orchestrator | ok: [testbed-manager]
2026-03-11 00:32:47.652336 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:32:47.652347 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:32:47.652358 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:32:47.652369 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:32:47.652379 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:32:47.652390 | orchestrator |
2026-03-11
00:32:47.652401 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2026-03-11 00:32:47.652411 | orchestrator | Wednesday 11 March 2026 00:32:39 +0000 (0:00:01.391) 0:06:54.837 ******* 2026-03-11 00:32:47.652422 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:32:47.652433 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:32:47.652443 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:32:47.652454 | orchestrator | changed: [testbed-manager] 2026-03-11 00:32:47.652465 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:32:47.652475 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:32:47.652486 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:32:47.652496 | orchestrator | 2026-03-11 00:32:47.652507 | orchestrator | TASK [osism.services.docker : Include service tasks] *************************** 2026-03-11 00:32:47.652518 | orchestrator | Wednesday 11 March 2026 00:32:40 +0000 (0:00:01.383) 0:06:56.221 ******* 2026-03-11 00:32:47.652529 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:32:47.652540 | orchestrator | 2026-03-11 00:32:47.652551 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] *************************** 2026-03-11 00:32:47.652562 | orchestrator | Wednesday 11 March 2026 00:32:41 +0000 (0:00:01.045) 0:06:57.266 ******* 2026-03-11 00:32:47.652588 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:32:47.652599 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:32:47.652610 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:32:47.652620 | orchestrator | ok: [testbed-manager] 2026-03-11 00:32:47.652631 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:32:47.652642 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:32:47.652652 | orchestrator | ok: 
[testbed-node-1] 2026-03-11 00:32:47.652663 | orchestrator | 2026-03-11 00:32:47.652674 | orchestrator | TASK [osism.services.docker : Manage service] ********************************** 2026-03-11 00:32:47.652685 | orchestrator | Wednesday 11 March 2026 00:32:42 +0000 (0:00:01.318) 0:06:58.585 ******* 2026-03-11 00:32:47.652696 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:32:47.652707 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:32:47.652717 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:32:47.652728 | orchestrator | ok: [testbed-manager] 2026-03-11 00:32:47.652738 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:32:47.652749 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:32:47.652759 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:32:47.652770 | orchestrator | 2026-03-11 00:32:47.652781 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ******************** 2026-03-11 00:32:47.652797 | orchestrator | Wednesday 11 March 2026 00:32:44 +0000 (0:00:01.121) 0:06:59.707 ******* 2026-03-11 00:32:47.652816 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:32:47.652833 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:32:47.652851 | orchestrator | ok: [testbed-manager] 2026-03-11 00:32:47.652871 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:32:47.652890 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:32:47.652908 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:32:47.652920 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:32:47.652930 | orchestrator | 2026-03-11 00:32:47.652941 | orchestrator | TASK [osism.services.docker : Manage containerd service] *********************** 2026-03-11 00:32:47.652952 | orchestrator | Wednesday 11 March 2026 00:32:45 +0000 (0:00:01.227) 0:07:00.934 ******* 2026-03-11 00:32:47.652963 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:32:47.652973 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:32:47.652984 | orchestrator | ok: [testbed-node-5] 2026-03-11 
00:32:47.652994 | orchestrator | ok: [testbed-manager] 2026-03-11 00:32:47.653005 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:32:47.653015 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:32:47.653025 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:32:47.653036 | orchestrator | 2026-03-11 00:32:47.653047 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] ************************* 2026-03-11 00:32:47.653057 | orchestrator | Wednesday 11 March 2026 00:32:46 +0000 (0:00:01.406) 0:07:02.340 ******* 2026-03-11 00:32:47.653068 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:32:47.653079 | orchestrator | 2026-03-11 00:32:47.653089 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-11 00:32:47.653100 | orchestrator | Wednesday 11 March 2026 00:32:47 +0000 (0:00:00.871) 0:07:03.212 ******* 2026-03-11 00:32:47.653111 | orchestrator | 2026-03-11 00:32:47.653121 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-11 00:32:47.653132 | orchestrator | Wednesday 11 March 2026 00:32:47 +0000 (0:00:00.042) 0:07:03.255 ******* 2026-03-11 00:32:47.653143 | orchestrator | 2026-03-11 00:32:47.653153 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-11 00:32:47.653164 | orchestrator | Wednesday 11 March 2026 00:32:47 +0000 (0:00:00.039) 0:07:03.294 ******* 2026-03-11 00:32:47.653175 | orchestrator | 2026-03-11 00:32:47.653185 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-11 00:32:47.653205 | orchestrator | Wednesday 11 March 2026 00:32:47 +0000 (0:00:00.046) 0:07:03.341 ******* 2026-03-11 00:33:15.779442 | orchestrator | 
2026-03-11 00:33:15.779563 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-11 00:33:15.779626 | orchestrator | Wednesday 11 March 2026 00:32:47 +0000 (0:00:00.043) 0:07:03.385 ******* 2026-03-11 00:33:15.779649 | orchestrator | 2026-03-11 00:33:15.779670 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-11 00:33:15.779688 | orchestrator | Wednesday 11 March 2026 00:32:47 +0000 (0:00:00.037) 0:07:03.422 ******* 2026-03-11 00:33:15.779708 | orchestrator | 2026-03-11 00:33:15.779726 | orchestrator | TASK [osism.services.docker : Flush handlers] ********************************** 2026-03-11 00:33:15.779746 | orchestrator | Wednesday 11 March 2026 00:32:47 +0000 (0:00:00.037) 0:07:03.460 ******* 2026-03-11 00:33:15.779764 | orchestrator | 2026-03-11 00:33:15.779783 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2026-03-11 00:33:15.779801 | orchestrator | Wednesday 11 March 2026 00:32:47 +0000 (0:00:00.042) 0:07:03.503 ******* 2026-03-11 00:33:15.779820 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:33:15.779840 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:33:15.779858 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:33:15.779877 | orchestrator | 2026-03-11 00:33:15.779894 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] ************* 2026-03-11 00:33:15.779913 | orchestrator | Wednesday 11 March 2026 00:32:49 +0000 (0:00:01.411) 0:07:04.914 ******* 2026-03-11 00:33:15.779933 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:33:15.779954 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:33:15.779973 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:33:15.779992 | orchestrator | changed: [testbed-manager] 2026-03-11 00:33:15.780013 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:33:15.780033 | orchestrator | changed: 
[testbed-node-2] 2026-03-11 00:33:15.780053 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:33:15.780073 | orchestrator | 2026-03-11 00:33:15.780090 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart logrotate service] *********** 2026-03-11 00:33:15.780109 | orchestrator | Wednesday 11 March 2026 00:32:50 +0000 (0:00:01.611) 0:07:06.526 ******* 2026-03-11 00:33:15.780129 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:33:15.780148 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:33:15.780199 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:33:15.780220 | orchestrator | changed: [testbed-manager] 2026-03-11 00:33:15.780239 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:33:15.780258 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:33:15.780278 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:33:15.780297 | orchestrator | 2026-03-11 00:33:15.780318 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] *************** 2026-03-11 00:33:15.780336 | orchestrator | Wednesday 11 March 2026 00:32:52 +0000 (0:00:01.250) 0:07:07.776 ******* 2026-03-11 00:33:15.780352 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:33:15.780365 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:33:15.780384 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:33:15.780402 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:33:15.780420 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:33:15.780438 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:33:15.780456 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:33:15.780474 | orchestrator | 2026-03-11 00:33:15.780512 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] **** 2026-03-11 00:33:15.780533 | orchestrator | Wednesday 11 March 2026 00:32:54 +0000 (0:00:02.636) 0:07:10.413 ******* 2026-03-11 00:33:15.780552 | orchestrator | skipping: [testbed-node-3] 
2026-03-11 00:33:15.780571 | orchestrator | 2026-03-11 00:33:15.780591 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************ 2026-03-11 00:33:15.780610 | orchestrator | Wednesday 11 March 2026 00:32:54 +0000 (0:00:00.098) 0:07:10.512 ******* 2026-03-11 00:33:15.780629 | orchestrator | ok: [testbed-manager] 2026-03-11 00:33:15.780648 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:33:15.780666 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:33:15.780682 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:33:15.780705 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:33:15.780716 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:33:15.780727 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:33:15.780737 | orchestrator | 2026-03-11 00:33:15.780748 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] *** 2026-03-11 00:33:15.780760 | orchestrator | Wednesday 11 March 2026 00:32:55 +0000 (0:00:01.143) 0:07:11.655 ******* 2026-03-11 00:33:15.780771 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:33:15.780781 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:33:15.780792 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:33:15.780803 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:33:15.780813 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:33:15.780828 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:33:15.780846 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:33:15.780866 | orchestrator | 2026-03-11 00:33:15.780884 | orchestrator | TASK [osism.services.docker : Include facts tasks] ***************************** 2026-03-11 00:33:15.780900 | orchestrator | Wednesday 11 March 2026 00:32:56 +0000 (0:00:00.770) 0:07:12.426 ******* 2026-03-11 00:33:15.780912 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml 
for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:33:15.780926 | orchestrator | 2026-03-11 00:33:15.780937 | orchestrator | TASK [osism.services.docker : Create facts directory] ************************** 2026-03-11 00:33:15.780948 | orchestrator | Wednesday 11 March 2026 00:32:57 +0000 (0:00:00.942) 0:07:13.368 ******* 2026-03-11 00:33:15.780959 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:33:15.780971 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:33:15.780988 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:33:15.781005 | orchestrator | ok: [testbed-manager] 2026-03-11 00:33:15.781024 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:33:15.781042 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:33:15.781057 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:33:15.781068 | orchestrator | 2026-03-11 00:33:15.781079 | orchestrator | TASK [osism.services.docker : Copy docker fact files] ************************** 2026-03-11 00:33:15.781090 | orchestrator | Wednesday 11 March 2026 00:32:58 +0000 (0:00:00.846) 0:07:14.215 ******* 2026-03-11 00:33:15.781108 | orchestrator | changed: [testbed-node-3] => (item=docker_containers) 2026-03-11 00:33:15.781147 | orchestrator | ok: [testbed-manager] => (item=docker_containers) 2026-03-11 00:33:15.781213 | orchestrator | changed: [testbed-node-4] => (item=docker_containers) 2026-03-11 00:33:15.781233 | orchestrator | changed: [testbed-node-5] => (item=docker_containers) 2026-03-11 00:33:15.781252 | orchestrator | changed: [testbed-node-0] => (item=docker_containers) 2026-03-11 00:33:15.781270 | orchestrator | changed: [testbed-node-1] => (item=docker_containers) 2026-03-11 00:33:15.781288 | orchestrator | changed: [testbed-node-2] => (item=docker_containers) 2026-03-11 00:33:15.781305 | orchestrator | ok: [testbed-manager] => (item=docker_images) 2026-03-11 00:33:15.781324 | orchestrator | changed: [testbed-node-3] => 
(item=docker_images) 2026-03-11 00:33:15.781342 | orchestrator | changed: [testbed-node-4] => (item=docker_images) 2026-03-11 00:33:15.781361 | orchestrator | changed: [testbed-node-0] => (item=docker_images) 2026-03-11 00:33:15.781380 | orchestrator | changed: [testbed-node-5] => (item=docker_images) 2026-03-11 00:33:15.781398 | orchestrator | changed: [testbed-node-1] => (item=docker_images) 2026-03-11 00:33:15.781411 | orchestrator | changed: [testbed-node-2] => (item=docker_images) 2026-03-11 00:33:15.781422 | orchestrator | 2026-03-11 00:33:15.781433 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] ******* 2026-03-11 00:33:15.781444 | orchestrator | Wednesday 11 March 2026 00:33:01 +0000 (0:00:02.764) 0:07:16.979 ******* 2026-03-11 00:33:15.781455 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:33:15.781465 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:33:15.781476 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:33:15.781498 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:33:15.781509 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:33:15.781542 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:33:15.781553 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:33:15.781564 | orchestrator | 2026-03-11 00:33:15.781575 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] *** 2026-03-11 00:33:15.781586 | orchestrator | Wednesday 11 March 2026 00:33:01 +0000 (0:00:00.485) 0:07:17.464 ******* 2026-03-11 00:33:15.781599 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:33:15.781612 | orchestrator | 2026-03-11 00:33:15.781623 | orchestrator | TASK [osism.commons.docker_compose : Remove 
docker-compose apt preferences file] *** 2026-03-11 00:33:15.781634 | orchestrator | Wednesday 11 March 2026 00:33:02 +0000 (0:00:00.774) 0:07:18.239 ******* 2026-03-11 00:33:15.781645 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:33:15.781655 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:33:15.781666 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:33:15.781677 | orchestrator | ok: [testbed-manager] 2026-03-11 00:33:15.781688 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:33:15.781698 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:33:15.781709 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:33:15.781719 | orchestrator | 2026-03-11 00:33:15.781737 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ****** 2026-03-11 00:33:15.781749 | orchestrator | Wednesday 11 March 2026 00:33:03 +0000 (0:00:00.844) 0:07:19.083 ******* 2026-03-11 00:33:15.781759 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:33:15.781770 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:33:15.781781 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:33:15.781791 | orchestrator | ok: [testbed-manager] 2026-03-11 00:33:15.781802 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:33:15.781812 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:33:15.781823 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:33:15.781833 | orchestrator | 2026-03-11 00:33:15.781844 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] ************* 2026-03-11 00:33:15.781855 | orchestrator | Wednesday 11 March 2026 00:33:04 +0000 (0:00:01.029) 0:07:20.112 ******* 2026-03-11 00:33:15.781866 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:33:15.781877 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:33:15.781887 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:33:15.781906 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:33:15.781924 | orchestrator | skipping: [testbed-node-0] 
2026-03-11 00:33:15.781944 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:33:15.781963 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:33:15.781981 | orchestrator | 2026-03-11 00:33:15.781993 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] ********* 2026-03-11 00:33:15.782004 | orchestrator | Wednesday 11 March 2026 00:33:04 +0000 (0:00:00.504) 0:07:20.617 ******* 2026-03-11 00:33:15.782100 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:33:15.782116 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:33:15.782134 | orchestrator | ok: [testbed-manager] 2026-03-11 00:33:15.782151 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:33:15.782274 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:33:15.782288 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:33:15.782299 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:33:15.782310 | orchestrator | 2026-03-11 00:33:15.782321 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] *************** 2026-03-11 00:33:15.782332 | orchestrator | Wednesday 11 March 2026 00:33:06 +0000 (0:00:01.679) 0:07:22.296 ******* 2026-03-11 00:33:15.782343 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:33:15.782354 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:33:15.782365 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:33:15.782375 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:33:15.782386 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:33:15.782408 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:33:15.782419 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:33:15.782430 | orchestrator | 2026-03-11 00:33:15.782441 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] **** 2026-03-11 00:33:15.782451 | orchestrator | Wednesday 11 March 2026 00:33:07 +0000 (0:00:00.461) 0:07:22.757 ******* 2026-03-11 00:33:15.782470 | orchestrator | 
ok: [testbed-manager] 2026-03-11 00:33:15.782489 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:33:15.782508 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:33:15.782526 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:33:15.782544 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:33:15.782555 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:33:15.782582 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:33:49.557079 | orchestrator | 2026-03-11 00:33:49.557224 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] *********** 2026-03-11 00:33:49.557243 | orchestrator | Wednesday 11 March 2026 00:33:15 +0000 (0:00:08.775) 0:07:31.533 ******* 2026-03-11 00:33:49.557256 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:33:49.557269 | orchestrator | ok: [testbed-manager] 2026-03-11 00:33:49.557281 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:33:49.557292 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:33:49.557303 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:33:49.557314 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:33:49.557325 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:33:49.557336 | orchestrator | 2026-03-11 00:33:49.557347 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] ********************** 2026-03-11 00:33:49.557358 | orchestrator | Wednesday 11 March 2026 00:33:17 +0000 (0:00:01.554) 0:07:33.087 ******* 2026-03-11 00:33:49.557369 | orchestrator | ok: [testbed-manager] 2026-03-11 00:33:49.557380 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:33:49.557391 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:33:49.557401 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:33:49.557412 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:33:49.557423 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:33:49.557434 | orchestrator | changed: [testbed-node-1] 2026-03-11 
00:33:49.557445 | orchestrator | 2026-03-11 00:33:49.557456 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] **** 2026-03-11 00:33:49.557467 | orchestrator | Wednesday 11 March 2026 00:33:19 +0000 (0:00:01.722) 0:07:34.810 ******* 2026-03-11 00:33:49.557478 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:33:49.557489 | orchestrator | ok: [testbed-manager] 2026-03-11 00:33:49.557499 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:33:49.557510 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:33:49.557521 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:33:49.557532 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:33:49.557542 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:33:49.557553 | orchestrator | 2026-03-11 00:33:49.557564 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-11 00:33:49.557575 | orchestrator | Wednesday 11 March 2026 00:33:20 +0000 (0:00:01.632) 0:07:36.443 ******* 2026-03-11 00:33:49.557586 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:33:49.557596 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:33:49.557607 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:33:49.557620 | orchestrator | ok: [testbed-manager] 2026-03-11 00:33:49.557632 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:33:49.557645 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:33:49.557657 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:33:49.557669 | orchestrator | 2026-03-11 00:33:49.557682 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-11 00:33:49.557695 | orchestrator | Wednesday 11 March 2026 00:33:21 +0000 (0:00:01.113) 0:07:37.557 ******* 2026-03-11 00:33:49.557707 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:33:49.557720 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:33:49.557733 | orchestrator | skipping: 
[testbed-node-5] 2026-03-11 00:33:49.557770 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:33:49.557783 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:33:49.557795 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:33:49.557808 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:33:49.557821 | orchestrator | 2026-03-11 00:33:49.557834 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] ***** 2026-03-11 00:33:49.557846 | orchestrator | Wednesday 11 March 2026 00:33:22 +0000 (0:00:00.807) 0:07:38.364 ******* 2026-03-11 00:33:49.557859 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:33:49.557871 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:33:49.557884 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:33:49.557896 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:33:49.557909 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:33:49.557921 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:33:49.557933 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:33:49.557946 | orchestrator | 2026-03-11 00:33:49.557959 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ****** 2026-03-11 00:33:49.557970 | orchestrator | Wednesday 11 March 2026 00:33:23 +0000 (0:00:00.485) 0:07:38.849 ******* 2026-03-11 00:33:49.557981 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:33:49.558011 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:33:49.558095 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:33:49.558185 | orchestrator | ok: [testbed-manager] 2026-03-11 00:33:49.558198 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:33:49.558209 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:33:49.558220 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:33:49.558231 | orchestrator | 2026-03-11 00:33:49.558242 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] *** 
2026-03-11 00:33:49.558253 | orchestrator | Wednesday 11 March 2026 00:33:23 +0000 (0:00:00.508) 0:07:39.358 *******
2026-03-11 00:33:49.558264 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:33:49.558275 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:33:49.558286 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:33:49.558297 | orchestrator | ok: [testbed-manager]
2026-03-11 00:33:49.558308 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:33:49.558318 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:33:49.558329 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:33:49.558339 | orchestrator |
2026-03-11 00:33:49.558351 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2026-03-11 00:33:49.558369 | orchestrator | Wednesday 11 March 2026 00:33:24 +0000 (0:00:00.722) 0:07:40.081 *******
2026-03-11 00:33:49.558386 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:33:49.558403 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:33:49.558421 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:33:49.558440 | orchestrator | ok: [testbed-manager]
2026-03-11 00:33:49.558457 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:33:49.558475 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:33:49.558492 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:33:49.558510 | orchestrator |
2026-03-11 00:33:49.558521 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2026-03-11 00:33:49.558532 | orchestrator | Wednesday 11 March 2026 00:33:24 +0000 (0:00:00.536) 0:07:40.617 *******
2026-03-11 00:33:49.558543 | orchestrator | ok: [testbed-manager]
2026-03-11 00:33:49.558554 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:33:49.558564 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:33:49.558575 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:33:49.558585 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:33:49.558596 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:33:49.558606 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:33:49.558617 | orchestrator |
2026-03-11 00:33:49.558650 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2026-03-11 00:33:49.558662 | orchestrator | Wednesday 11 March 2026 00:33:30 +0000 (0:00:05.510) 0:07:46.128 *******
2026-03-11 00:33:49.558673 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:33:49.558684 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:33:49.558708 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:33:49.558719 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:33:49.558730 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:33:49.558740 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:33:49.558751 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:33:49.558762 | orchestrator |
2026-03-11 00:33:49.558773 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2026-03-11 00:33:49.558783 | orchestrator | Wednesday 11 March 2026 00:33:30 +0000 (0:00:00.525) 0:07:46.653 *******
2026-03-11 00:33:49.558797 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:33:49.558810 | orchestrator |
2026-03-11 00:33:49.558821 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2026-03-11 00:33:49.558832 | orchestrator | Wednesday 11 March 2026 00:33:31 +0000 (0:00:01.013) 0:07:47.666 *******
2026-03-11 00:33:49.558860 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:33:49.558871 | orchestrator | ok: [testbed-manager]
2026-03-11 00:33:49.558882 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:33:49.558893 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:33:49.558904 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:33:49.558914 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:33:49.558925 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:33:49.558935 | orchestrator |
2026-03-11 00:33:49.558946 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2026-03-11 00:33:49.558957 | orchestrator | Wednesday 11 March 2026 00:33:34 +0000 (0:00:02.038) 0:07:49.705 *******
2026-03-11 00:33:49.558967 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:33:49.558978 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:33:49.558988 | orchestrator | ok: [testbed-manager]
2026-03-11 00:33:49.558999 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:33:49.559009 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:33:49.559020 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:33:49.559030 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:33:49.559041 | orchestrator |
2026-03-11 00:33:49.559052 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2026-03-11 00:33:49.559063 | orchestrator | Wednesday 11 March 2026 00:33:35 +0000 (0:00:01.388) 0:07:51.093 *******
2026-03-11 00:33:49.559073 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:33:49.559084 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:33:49.559095 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:33:49.559132 | orchestrator | ok: [testbed-manager]
2026-03-11 00:33:49.559152 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:33:49.559171 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:33:49.559190 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:33:49.559207 | orchestrator |
2026-03-11 00:33:49.559223 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2026-03-11 00:33:49.559241 | orchestrator | Wednesday 11 March 2026 00:33:36 +0000 (0:00:00.864) 0:07:51.958 *******
2026-03-11 00:33:49.559252 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-11 00:33:49.559265 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-11 00:33:49.559276 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-11 00:33:49.559287 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-11 00:33:49.559298 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-11 00:33:49.559309 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-11 00:33:49.559327 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2026-03-11 00:33:49.559338 | orchestrator |
2026-03-11 00:33:49.559349 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2026-03-11 00:33:49.559359 | orchestrator | Wednesday 11 March 2026 00:33:38 +0000 (0:00:01.922) 0:07:53.880 *******
2026-03-11 00:33:49.559371 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:33:49.559382 | orchestrator |
2026-03-11 00:33:49.559392 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2026-03-11 00:33:49.559403 | orchestrator | Wednesday 11 March 2026 00:33:38 +0000 (0:00:00.813) 0:07:54.694 *******
2026-03-11 00:33:49.559413 | orchestrator | changed: [testbed-manager]
2026-03-11 00:33:49.559424 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:33:49.559435 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:33:49.559446 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:33:49.559456 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:33:49.559467 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:33:49.559478 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:33:49.559488 | orchestrator |
2026-03-11 00:33:49.559508 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2026-03-11 00:34:20.993388 | orchestrator | Wednesday 11 March 2026 00:33:49 +0000 (0:00:10.555) 0:08:05.249 *******
2026-03-11 00:34:20.993520 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:34:20.993547 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:34:20.993568 | orchestrator | ok: [testbed-manager]
2026-03-11 00:34:20.993588 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:34:20.993608 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:34:20.993627 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:34:20.993648 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:34:20.993668 | orchestrator |
2026-03-11 00:34:20.993689 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2026-03-11 00:34:20.993709 | orchestrator | Wednesday 11 March 2026 00:33:51 +0000 (0:00:01.742) 0:08:06.992 *******
2026-03-11 00:34:20.993730 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:34:20.993750 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:34:20.993771 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:34:20.993790 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:34:20.993811 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:34:20.993830 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:34:20.993851 | orchestrator |
2026-03-11 00:34:20.993870 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2026-03-11 00:34:20.993892 | orchestrator | Wednesday 11 March 2026 00:33:52 +0000 (0:00:01.361) 0:08:08.353 *******
2026-03-11 00:34:20.993915 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:20.993938 | orchestrator | changed: [testbed-manager]
2026-03-11 00:34:20.993959 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:20.993980 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:20.994002 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:20.994179 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:20.994203 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:20.994224 | orchestrator |
2026-03-11 00:34:20.994244 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2026-03-11 00:34:20.994265 | orchestrator |
2026-03-11 00:34:20.994284 | orchestrator | TASK [Include hardening role] **************************************************
2026-03-11 00:34:20.994303 | orchestrator | Wednesday 11 March 2026 00:33:53 +0000 (0:00:01.196) 0:08:09.550 *******
2026-03-11 00:34:20.994322 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:34:20.994341 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:34:20.994394 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:34:20.994413 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:34:20.994432 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:34:20.994450 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:34:20.994468 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:34:20.994485 | orchestrator |
2026-03-11 00:34:20.994503 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2026-03-11 00:34:20.994522 | orchestrator |
2026-03-11 00:34:20.994540 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2026-03-11 00:34:20.994559 | orchestrator | Wednesday 11 March 2026 00:33:54 +0000 (0:00:00.560) 0:08:10.110 *******
2026-03-11 00:34:20.994578 | orchestrator | changed: [testbed-manager]
2026-03-11 00:34:20.994597 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:20.994615 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:20.994634 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:20.994653 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:20.994690 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:20.994711 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:20.994729 | orchestrator |
2026-03-11 00:34:20.994748 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2026-03-11 00:34:20.994767 | orchestrator | Wednesday 11 March 2026 00:33:55 +0000 (0:00:01.267) 0:08:11.378 *******
2026-03-11 00:34:20.994786 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:34:20.994804 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:34:20.994823 | orchestrator | ok: [testbed-manager]
2026-03-11 00:34:20.994842 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:34:20.994860 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:34:20.994880 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:34:20.994898 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:34:20.994917 | orchestrator |
2026-03-11 00:34:20.994936 | orchestrator | TASK [Include auditd role] *****************************************************
2026-03-11 00:34:20.994955 | orchestrator | Wednesday 11 March 2026 00:33:56 +0000 (0:00:01.320) 0:08:12.698 *******
2026-03-11 00:34:20.994974 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:34:20.994992 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:34:20.995011 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:34:20.995030 | orchestrator | skipping: [testbed-manager]
2026-03-11 00:34:20.995049 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:34:20.995091 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:34:20.995110 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:34:20.995128 | orchestrator |
2026-03-11 00:34:20.995147 | orchestrator | TASK [Include smartd role] *****************************************************
2026-03-11 00:34:20.995165 | orchestrator | Wednesday 11 March 2026 00:33:57 +0000 (0:00:00.557) 0:08:13.255 *******
2026-03-11 00:34:20.995185 | orchestrator | included: osism.services.smartd for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:34:20.995204 | orchestrator |
2026-03-11 00:34:20.995222 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2026-03-11 00:34:20.995241 | orchestrator | Wednesday 11 March 2026 00:33:58 +0000 (0:00:00.730) 0:08:13.986 *******
2026-03-11 00:34:20.995261 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:34:20.995283 | orchestrator |
2026-03-11 00:34:20.995302 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2026-03-11 00:34:20.995320 | orchestrator | Wednesday 11 March 2026 00:33:58 +0000 (0:00:00.677) 0:08:14.663 *******
2026-03-11 00:34:20.995339 | orchestrator | changed: [testbed-manager]
2026-03-11 00:34:20.995357 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:20.995376 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:20.995394 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:20.995428 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:20.995447 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:20.995464 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:20.995482 | orchestrator |
2026-03-11 00:34:20.995522 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2026-03-11 00:34:20.995544 | orchestrator | Wednesday 11 March 2026 00:34:08 +0000 (0:00:10.000) 0:08:24.663 *******
2026-03-11 00:34:20.995562 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:20.995581 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:20.995599 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:20.995617 | orchestrator | changed: [testbed-manager]
2026-03-11 00:34:20.995635 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:20.995654 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:20.995672 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:20.995690 | orchestrator |
2026-03-11 00:34:20.995709 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2026-03-11 00:34:20.995727 | orchestrator | Wednesday 11 March 2026 00:34:09 +0000 (0:00:00.843) 0:08:25.507 *******
2026-03-11 00:34:20.995746 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:20.995764 | orchestrator | changed: [testbed-manager]
2026-03-11 00:34:20.995782 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:20.995801 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:20.995818 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:20.995837 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:20.995855 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:20.995873 | orchestrator |
2026-03-11 00:34:20.995892 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2026-03-11 00:34:20.995910 | orchestrator | Wednesday 11 March 2026 00:34:11 +0000 (0:00:01.372) 0:08:26.879 *******
2026-03-11 00:34:20.995929 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:20.995947 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:20.995965 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:20.995983 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:20.996002 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:20.996019 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:20.996038 | orchestrator | changed: [testbed-manager]
2026-03-11 00:34:20.996079 | orchestrator |
2026-03-11 00:34:20.996098 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2026-03-11 00:34:20.996116 | orchestrator | Wednesday 11 March 2026 00:34:13 +0000 (0:00:02.692) 0:08:29.571 *******
2026-03-11 00:34:20.996133 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:20.996152 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:20.996170 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:20.996187 | orchestrator | changed: [testbed-manager]
2026-03-11 00:34:20.996206 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:20.996225 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:20.996243 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:20.996261 | orchestrator |
2026-03-11 00:34:20.996280 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2026-03-11 00:34:20.996299 | orchestrator | Wednesday 11 March 2026 00:34:15 +0000 (0:00:01.189) 0:08:30.760 *******
2026-03-11 00:34:20.996317 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:20.996335 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:20.996354 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:20.996372 | orchestrator | changed: [testbed-manager]
2026-03-11 00:34:20.996390 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:20.996417 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:20.996435 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:20.996453 | orchestrator |
2026-03-11 00:34:20.996472 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2026-03-11 00:34:20.996490 | orchestrator |
2026-03-11 00:34:20.996509 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2026-03-11 00:34:20.996527 | orchestrator | Wednesday 11 March 2026 00:34:16 +0000 (0:00:01.109) 0:08:31.870 *******
2026-03-11 00:34:20.996557 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:34:20.996576 | orchestrator |
2026-03-11 00:34:20.996594 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-11 00:34:20.996613 | orchestrator | Wednesday 11 March 2026 00:34:17 +0000 (0:00:00.967) 0:08:32.838 *******
2026-03-11 00:34:20.996630 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:34:20.996646 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:34:20.996662 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:34:20.996678 | orchestrator | ok: [testbed-manager]
2026-03-11 00:34:20.996694 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:34:20.996710 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:34:20.996726 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:34:20.996742 | orchestrator |
2026-03-11 00:34:20.996759 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-11 00:34:20.996775 | orchestrator | Wednesday 11 March 2026 00:34:18 +0000 (0:00:00.864) 0:08:33.702 *******
2026-03-11 00:34:20.996792 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:20.996808 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:20.996825 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:20.996841 | orchestrator | changed: [testbed-manager]
2026-03-11 00:34:20.996857 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:20.996873 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:20.996890 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:20.996907 | orchestrator |
2026-03-11 00:34:20.996923 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2026-03-11 00:34:20.996939 | orchestrator | Wednesday 11 March 2026 00:34:19 +0000 (0:00:01.125) 0:08:34.827 *******
2026-03-11 00:34:20.996955 | orchestrator | included: osism.commons.state for testbed-node-3, testbed-node-4, testbed-node-5, testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:34:20.996972 | orchestrator |
2026-03-11 00:34:20.996988 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2026-03-11 00:34:20.997005 | orchestrator | Wednesday 11 March 2026 00:34:20 +0000 (0:00:00.984) 0:08:35.812 *******
2026-03-11 00:34:20.997021 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:34:20.997038 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:34:20.997075 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:34:20.997092 | orchestrator | ok: [testbed-manager]
2026-03-11 00:34:20.997109 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:34:20.997164 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:34:20.997183 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:34:20.997200 | orchestrator |
2026-03-11 00:34:20.997226 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2026-03-11 00:34:22.479035 | orchestrator | Wednesday 11 March 2026 00:34:20 +0000 (0:00:00.874) 0:08:36.686 *******
2026-03-11 00:34:22.479175 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:22.479193 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:22.479206 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:22.479217 | orchestrator | changed: [testbed-manager]
2026-03-11 00:34:22.479228 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:22.479239 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:22.479250 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:22.479261 | orchestrator |
2026-03-11 00:34:22.479273 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:34:22.479285 | orchestrator | testbed-manager : ok=168  changed=40  unreachable=0 failed=0 skipped=42  rescued=0 ignored=0
2026-03-11 00:34:22.479298 | orchestrator | testbed-node-0 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-11 00:34:22.479309 | orchestrator | testbed-node-1 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-11 00:34:22.479348 | orchestrator | testbed-node-2 : ok=177  changed=69  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2026-03-11 00:34:22.479359 | orchestrator | testbed-node-3 : ok=175  changed=65  unreachable=0 failed=0 skipped=38  rescued=0 ignored=0
2026-03-11 00:34:22.479370 | orchestrator | testbed-node-4 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-11 00:34:22.479381 | orchestrator | testbed-node-5 : ok=175  changed=65  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2026-03-11 00:34:22.479392 | orchestrator |
2026-03-11 00:34:22.479402 | orchestrator |
2026-03-11 00:34:22.479413 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:34:22.479424 | orchestrator | Wednesday 11 March 2026 00:34:22 +0000 (0:00:01.117) 0:08:37.804 *******
2026-03-11 00:34:22.479435 | orchestrator | ===============================================================================
2026-03-11 00:34:22.479446 | orchestrator | osism.commons.packages : Install required packages --------------------- 83.31s
2026-03-11 00:34:22.479456 | orchestrator | osism.commons.packages : Download required packages -------------------- 36.23s
2026-03-11 00:34:22.479467 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.85s
2026-03-11 00:34:22.479492 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.67s
2026-03-11 00:34:22.479504 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.28s
2026-03-11 00:34:22.479514 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.24s
2026-03-11 00:34:22.479526 | orchestrator | osism.services.docker : Install containerd package --------------------- 11.20s
2026-03-11 00:34:22.479538 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 10.88s
2026-03-11 00:34:22.479551 | orchestrator | osism.services.lldpd : Install lldpd package --------------------------- 10.55s
2026-03-11 00:34:22.479563 | orchestrator | osism.services.smartd : Install smartmontools package ------------------ 10.00s
2026-03-11 00:34:22.479575 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.97s
2026-03-11 00:34:22.479588 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 9.67s
2026-03-11 00:34:22.479600 | orchestrator | osism.services.docker : Add repository ---------------------------------- 9.60s
2026-03-11 00:34:22.479613 | orchestrator | osism.services.rng : Install rng package -------------------------------- 9.50s
2026-03-11 00:34:22.479625 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 9.26s
2026-03-11 00:34:22.479638 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 9.03s
2026-03-11 00:34:22.479650 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 8.78s
2026-03-11 00:34:22.479662 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 7.22s
2026-03-11 00:34:22.479674 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 7.13s
2026-03-11 00:34:22.479687 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.51s
2026-03-11 00:34:22.863623 | orchestrator | + osism apply fail2ban
2026-03-11 00:34:35.630422 | orchestrator | 2026-03-11 00:34:35 | INFO  | Prepare task for execution of fail2ban.
2026-03-11 00:34:35.715425 | orchestrator | 2026-03-11 00:34:35 | INFO  | Task de4f34a1-0d5c-4d95-bd87-463e613796e6 (fail2ban) was prepared for execution.
2026-03-11 00:34:35.715539 | orchestrator | 2026-03-11 00:34:35 | INFO  | It takes a moment until task de4f34a1-0d5c-4d95-bd87-463e613796e6 (fail2ban) has been started and output is visible here.
2026-03-11 00:34:58.013477 | orchestrator |
2026-03-11 00:34:58.013615 | orchestrator | PLAY [Apply role fail2ban] *****************************************************
2026-03-11 00:34:58.013660 | orchestrator |
2026-03-11 00:34:58.013671 | orchestrator | TASK [osism.services.fail2ban : Include distribution specific install tasks] ***
2026-03-11 00:34:58.013682 | orchestrator | Wednesday 11 March 2026 00:34:40 +0000 (0:00:00.254) 0:00:00.254 *******
2026-03-11 00:34:58.013710 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/fail2ban/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:34:58.013724 | orchestrator |
2026-03-11 00:34:58.013745 | orchestrator | TASK [osism.services.fail2ban : Install fail2ban package] **********************
2026-03-11 00:34:58.013755 | orchestrator | Wednesday 11 March 2026 00:34:41 +0000 (0:00:01.102) 0:00:01.357 *******
2026-03-11 00:34:58.013766 | orchestrator | changed: [testbed-manager]
2026-03-11 00:34:58.013777 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:58.013787 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:58.013797 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:58.013806 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:58.013816 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:58.013825 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:58.013835 | orchestrator |
2026-03-11 00:34:58.013845 | orchestrator | TASK [osism.services.fail2ban : Copy configuration files] **********************
2026-03-11 00:34:58.013854 | orchestrator | Wednesday 11 March 2026 00:34:52 +0000 (0:00:11.478) 0:00:12.836 *******
2026-03-11 00:34:58.013864 | orchestrator | changed: [testbed-manager]
2026-03-11 00:34:58.013874 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:58.013884 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:58.013893 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:58.013903 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:58.013912 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:58.013922 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:58.013932 | orchestrator |
2026-03-11 00:34:58.013941 | orchestrator | TASK [osism.services.fail2ban : Manage fail2ban service] ***********************
2026-03-11 00:34:58.013952 | orchestrator | Wednesday 11 March 2026 00:34:54 +0000 (0:00:01.503) 0:00:14.435 *******
2026-03-11 00:34:58.013962 | orchestrator | ok: [testbed-manager]
2026-03-11 00:34:58.014093 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:34:58.014107 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:34:58.014119 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:34:58.014131 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:34:58.014142 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:34:58.014153 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:34:58.014164 | orchestrator |
2026-03-11 00:34:58.014176 | orchestrator | TASK [osism.services.fail2ban : Reload fail2ban configuration] *****************
2026-03-11 00:34:58.014187 | orchestrator | Wednesday 11 March 2026 00:34:55 +0000 (0:00:01.503) 0:00:15.938 *******
2026-03-11 00:34:58.014199 | orchestrator | changed: [testbed-manager]
2026-03-11 00:34:58.014210 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:34:58.014222 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:34:58.014233 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:34:58.014244 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:34:58.014255 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:34:58.014266 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:34:58.014278 | orchestrator |
2026-03-11 00:34:58.014289 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:34:58.014317 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:34:58.014329 | orchestrator | testbed-node-0 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:34:58.014339 | orchestrator | testbed-node-1 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:34:58.014349 | orchestrator | testbed-node-2 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:34:58.014368 | orchestrator | testbed-node-3 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:34:58.014378 | orchestrator | testbed-node-4 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:34:58.014388 | orchestrator | testbed-node-5 : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:34:58.014397 | orchestrator |
2026-03-11 00:34:58.014407 | orchestrator |
2026-03-11 00:34:58.014417 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:34:58.014427 | orchestrator | Wednesday 11 March 2026 00:34:57 +0000 (0:00:01.780) 0:00:17.718 *******
2026-03-11 00:34:58.014436 | orchestrator | ===============================================================================
2026-03-11 00:34:58.014446 | orchestrator | osism.services.fail2ban : Install fail2ban package --------------------- 11.48s
2026-03-11 00:34:58.014455 | orchestrator | osism.services.fail2ban : Reload fail2ban configuration ----------------- 1.78s
2026-03-11 00:34:58.014465 | orchestrator | osism.services.fail2ban : Copy configuration files ---------------------- 1.60s
2026-03-11 00:34:58.014475 | orchestrator | osism.services.fail2ban : Manage fail2ban service ----------------------- 1.50s
2026-03-11 00:34:58.014484 | orchestrator | osism.services.fail2ban : Include distribution specific install tasks --- 1.10s
2026-03-11 00:34:58.344347 | orchestrator | + [[ -e /etc/redhat-release ]]
2026-03-11 00:34:58.344479 | orchestrator | + osism apply network
2026-03-11 00:35:10.297224 | orchestrator | 2026-03-11 00:35:10 | INFO  | Prepare task for execution of network.
2026-03-11 00:35:10.368282 | orchestrator | 2026-03-11 00:35:10 | INFO  | Task 51584f2d-7632-4989-b9dc-ba30967d1cc8 (network) was prepared for execution.
2026-03-11 00:35:10.368354 | orchestrator | 2026-03-11 00:35:10 | INFO  | It takes a moment until task 51584f2d-7632-4989-b9dc-ba30967d1cc8 (network) has been started and output is visible here.
2026-03-11 00:35:37.817831 | orchestrator |
2026-03-11 00:35:37.817958 | orchestrator | PLAY [Apply role network] ******************************************************
2026-03-11 00:35:37.817967 | orchestrator |
2026-03-11 00:35:37.817973 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2026-03-11 00:35:37.817978 | orchestrator | Wednesday 11 March 2026 00:35:14 +0000 (0:00:00.188) 0:00:00.189 *******
2026-03-11 00:35:37.817983 | orchestrator | ok: [testbed-manager]
2026-03-11 00:35:37.817990 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:35:37.817995 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:35:37.817999 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:35:37.818003 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:35:37.818008 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:35:37.818046 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:35:37.818052 | orchestrator |
2026-03-11 00:35:37.818056 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2026-03-11 00:35:37.818061 | orchestrator | Wednesday 11 March 2026 00:35:15 +0000 (0:00:00.490) 0:00:00.679 *******
2026-03-11 00:35:37.818067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:35:37.818074 | orchestrator |
2026-03-11 00:35:37.818079 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2026-03-11 00:35:37.818084 | orchestrator | Wednesday 11 March 2026 00:35:15 +0000 (0:00:00.855) 0:00:01.534 *******
2026-03-11 00:35:37.818088 | orchestrator | ok: [testbed-manager]
2026-03-11 00:35:37.818093 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:35:37.818097 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:35:37.818102 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:35:37.818107 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:35:37.818128 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:35:37.818132 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:35:37.818137 | orchestrator |
2026-03-11 00:35:37.818142 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2026-03-11 00:35:37.818146 | orchestrator | Wednesday 11 March 2026 00:35:17 +0000 (0:00:02.002) 0:00:03.537 *******
2026-03-11 00:35:37.818151 | orchestrator | ok: [testbed-manager]
2026-03-11 00:35:37.818156 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:35:37.818160 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:35:37.818164 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:35:37.818169 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:35:37.818174 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:35:37.818178 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:35:37.818182 | orchestrator |
2026-03-11 00:35:37.818187 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2026-03-11 00:35:37.818192 | orchestrator | Wednesday 11 March 2026 00:35:19 +0000 (0:00:01.639) 0:00:05.177 *******
2026-03-11 00:35:37.818196 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2026-03-11 00:35:37.818202 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2026-03-11 00:35:37.818206 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2026-03-11 00:35:37.818211 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2026-03-11 00:35:37.818216 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2026-03-11 00:35:37.818220 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2026-03-11 00:35:37.818225 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2026-03-11 00:35:37.818229 | orchestrator |
2026-03-11 00:35:37.818234 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2026-03-11 00:35:37.818239 | orchestrator | Wednesday 11 March 2026 00:35:20 +0000 (0:00:00.925) 0:00:06.102 *******
2026-03-11 00:35:37.818244 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-11 00:35:37.818249 | orchestrator | ok: [testbed-node-4 -> localhost]
2026-03-11 00:35:37.818254 | orchestrator | ok: [testbed-node-2 -> localhost]
2026-03-11 00:35:37.818258 | orchestrator | ok: [testbed-node-3 -> localhost]
2026-03-11 00:35:37.818263 | orchestrator | ok: [testbed-node-1 -> localhost]
2026-03-11 00:35:37.818267 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-11 00:35:37.818272 | orchestrator | ok: [testbed-node-5 -> localhost]
2026-03-11 00:35:37.818277 | orchestrator |
2026-03-11 00:35:37.818281 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2026-03-11 00:35:37.818286 | orchestrator | Wednesday 11 March 2026 00:35:23 +0000 (0:00:03.224) 0:00:09.326 *******
2026-03-11 00:35:37.818290 | orchestrator | changed: [testbed-manager]
2026-03-11 00:35:37.818295 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:35:37.818299 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:35:37.818304 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:35:37.818308 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:35:37.818312 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:35:37.818317 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:35:37.818321 | orchestrator |
2026-03-11 00:35:37.818326 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2026-03-11 00:35:37.818330 | orchestrator | Wednesday 11 March 2026 00:35:25 +0000 (0:00:01.589) 0:00:10.916 *******
2026-03-11 00:35:37.818335 | orchestrator | ok: [testbed-manager -> localhost]
2026-03-11 00:35:37.818339 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-11 00:35:37.818344 | orchestrator | ok: [testbed-node-1
-> localhost] 2026-03-11 00:35:37.818348 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-11 00:35:37.818353 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-11 00:35:37.818357 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-11 00:35:37.818361 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-11 00:35:37.818366 | orchestrator | 2026-03-11 00:35:37.818370 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2026-03-11 00:35:37.818375 | orchestrator | Wednesday 11 March 2026 00:35:27 +0000 (0:00:01.937) 0:00:12.854 ******* 2026-03-11 00:35:37.818385 | orchestrator | ok: [testbed-manager] 2026-03-11 00:35:37.818389 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:35:37.818394 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:35:37.818398 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:35:37.818404 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:35:37.818409 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:35:37.818414 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:35:37.818419 | orchestrator | 2026-03-11 00:35:37.818424 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 2026-03-11 00:35:37.818440 | orchestrator | Wednesday 11 March 2026 00:35:28 +0000 (0:00:01.180) 0:00:14.035 ******* 2026-03-11 00:35:37.818446 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:35:37.818451 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:35:37.818456 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:35:37.818461 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:35:37.818478 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:35:37.818483 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:35:37.818489 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:35:37.818494 | orchestrator | 2026-03-11 00:35:37.818499 | orchestrator | TASK [osism.commons.network : Install package 
networkd-dispatcher] ************* 2026-03-11 00:35:37.818504 | orchestrator | Wednesday 11 March 2026 00:35:29 +0000 (0:00:00.644) 0:00:14.680 ******* 2026-03-11 00:35:37.818509 | orchestrator | ok: [testbed-manager] 2026-03-11 00:35:37.818515 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:35:37.818520 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:35:37.818525 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:35:37.818530 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:35:37.818535 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:35:37.818540 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:35:37.818545 | orchestrator | 2026-03-11 00:35:37.818551 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2026-03-11 00:35:37.818556 | orchestrator | Wednesday 11 March 2026 00:35:31 +0000 (0:00:02.294) 0:00:16.974 ******* 2026-03-11 00:35:37.818561 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:35:37.818566 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:35:37.818571 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:35:37.818577 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:35:37.818582 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:35:37.818587 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:35:37.818593 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2026-03-11 00:35:37.818599 | orchestrator | 2026-03-11 00:35:37.818604 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2026-03-11 00:35:37.818610 | orchestrator | Wednesday 11 March 2026 00:35:32 +0000 (0:00:00.755) 0:00:17.729 ******* 2026-03-11 00:35:37.818615 | orchestrator | ok: [testbed-manager] 2026-03-11 00:35:37.818620 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:35:37.818625 | orchestrator | changed: [testbed-node-3] 2026-03-11 
00:35:37.818630 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:35:37.818635 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:35:37.818640 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:35:37.818645 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:35:37.818651 | orchestrator | 2026-03-11 00:35:37.818656 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2026-03-11 00:35:37.818662 | orchestrator | Wednesday 11 March 2026 00:35:33 +0000 (0:00:01.537) 0:00:19.267 ******* 2026-03-11 00:35:37.818670 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:35:37.818677 | orchestrator | 2026-03-11 00:35:37.818682 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-11 00:35:37.818687 | orchestrator | Wednesday 11 March 2026 00:35:34 +0000 (0:00:01.236) 0:00:20.504 ******* 2026-03-11 00:35:37.818702 | orchestrator | ok: [testbed-manager] 2026-03-11 00:35:37.818708 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:35:37.818713 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:35:37.818718 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:35:37.818723 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:35:37.818728 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:35:37.818733 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:35:37.818739 | orchestrator | 2026-03-11 00:35:37.818744 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2026-03-11 00:35:37.818750 | orchestrator | Wednesday 11 March 2026 00:35:35 +0000 (0:00:01.124) 0:00:21.629 ******* 2026-03-11 00:35:37.818755 | orchestrator | ok: [testbed-manager] 2026-03-11 00:35:37.818760 | orchestrator | ok: [testbed-node-0] 2026-03-11 
00:35:37.818764 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:35:37.818769 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:35:37.818773 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:35:37.818778 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:35:37.818782 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:35:37.818786 | orchestrator | 2026-03-11 00:35:37.818791 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2026-03-11 00:35:37.818795 | orchestrator | Wednesday 11 March 2026 00:35:36 +0000 (0:00:00.657) 0:00:22.286 ******* 2026-03-11 00:35:37.818800 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2026-03-11 00:35:37.818805 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2026-03-11 00:35:37.818809 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2026-03-11 00:35:37.818813 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2026-03-11 00:35:37.818818 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2026-03-11 00:35:37.818823 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-11 00:35:37.818827 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-11 00:35:37.818831 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2026-03-11 00:35:37.818836 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-11 00:35:37.818840 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-11 00:35:37.818845 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-11 00:35:37.818849 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2026-03-11 00:35:37.818854 | orchestrator | changed: [testbed-node-4] => 
(item=/etc/netplan/50-cloud-init.yaml) 2026-03-11 00:35:37.818858 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2026-03-11 00:35:37.818876 | orchestrator | 2026-03-11 00:35:37.818884 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2026-03-11 00:35:52.422103 | orchestrator | Wednesday 11 March 2026 00:35:37 +0000 (0:00:01.194) 0:00:23.481 ******* 2026-03-11 00:35:52.422220 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:35:52.422231 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:35:52.422239 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:35:52.422246 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:35:52.422252 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:35:52.422259 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:35:52.422265 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:35:52.422272 | orchestrator | 2026-03-11 00:35:52.422279 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************ 2026-03-11 00:35:52.422285 | orchestrator | Wednesday 11 March 2026 00:35:38 +0000 (0:00:00.629) 0:00:24.110 ******* 2026-03-11 00:35:52.422293 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-0, testbed-node-2, testbed-manager, testbed-node-1, testbed-node-3, testbed-node-5, testbed-node-4 2026-03-11 00:35:52.422326 | orchestrator | 2026-03-11 00:35:52.422333 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************ 2026-03-11 00:35:52.422340 | orchestrator | Wednesday 11 March 2026 00:35:42 +0000 (0:00:04.320) 0:00:28.431 ******* 2026-03-11 00:35:52.422347 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', 
'192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:35:52.422357 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:35:52.422365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:35:52.422386 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:35:52.422392 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:35:52.422399 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:35:52.422405 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:35:52.422411 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 
'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:35:52.422417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:35:52.422429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:35:52.422435 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:35:52.422458 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:35:52.422465 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:35:52.422477 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': 
'192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:35:52.422483 | orchestrator | 2026-03-11 00:35:52.422490 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] *********** 2026-03-11 00:35:52.422496 | orchestrator | Wednesday 11 March 2026 00:35:47 +0000 (0:00:04.841) 0:00:33.273 ******* 2026-03-11 00:35:52.422502 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:35:52.422509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:35:52.422515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:35:52.422521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:35:52.422531 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:35:52.422538 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', 
'192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:35:52.422544 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:35:52.422550 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}}) 2026-03-11 00:35:52.422557 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:35:52.422563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:35:52.422569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:35:52.422575 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:35:52.422592 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:36:04.678903 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2026-03-11 00:36:04.679032 | orchestrator | 2026-03-11 00:36:04.679052 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2026-03-11 00:36:04.679066 | orchestrator | Wednesday 11 March 2026 00:35:52 +0000 (0:00:05.213) 0:00:38.486 ******* 2026-03-11 00:36:04.679080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:36:04.679094 | orchestrator | 2026-03-11 00:36:04.679106 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2026-03-11 00:36:04.679119 | orchestrator | Wednesday 11 March 2026 00:35:53 +0000 (0:00:01.072) 0:00:39.559 ******* 2026-03-11 00:36:04.679132 | orchestrator | ok: [testbed-manager] 2026-03-11 00:36:04.679146 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:36:04.679158 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:36:04.679171 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:36:04.679183 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:36:04.679196 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:36:04.679209 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:36:04.679220 | orchestrator | 2026-03-11 00:36:04.679232 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2026-03-11 00:36:04.679244 | orchestrator | Wednesday 11 March 2026 00:35:54 +0000 (0:00:01.027) 0:00:40.586 ******* 2026-03-11 00:36:04.679256 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-11 00:36:04.679269 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-11 00:36:04.679281 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-11 00:36:04.679293 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-11 00:36:04.679305 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-11 00:36:04.679317 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-11 00:36:04.679347 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-11 00:36:04.679360 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-11 00:36:04.679372 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:36:04.679385 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-11 00:36:04.679397 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-11 00:36:04.679408 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-11 00:36:04.679420 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:36:04.679433 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-11 00:36:04.679444 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-11 00:36:04.679456 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  
2026-03-11 00:36:04.679468 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-11 00:36:04.679480 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-11 00:36:04.679516 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:36:04.679529 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-11 00:36:04.679540 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-11 00:36:04.679552 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-11 00:36:04.679564 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-11 00:36:04.679577 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:36:04.679589 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-11 00:36:04.679602 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-11 00:36:04.679614 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-11 00:36:04.679625 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:36:04.679637 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-11 00:36:04.679649 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:36:04.679661 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2026-03-11 00:36:04.679673 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2026-03-11 00:36:04.679684 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2026-03-11 00:36:04.679696 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2026-03-11 00:36:04.679707 | 
orchestrator | skipping: [testbed-node-5] 2026-03-11 00:36:04.679719 | orchestrator | 2026-03-11 00:36:04.679732 | orchestrator | TASK [osism.commons.network : Include network extra init] ********************** 2026-03-11 00:36:04.679769 | orchestrator | Wednesday 11 March 2026 00:35:55 +0000 (0:00:00.720) 0:00:41.307 ******* 2026-03-11 00:36:04.679786 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/network-extra-init.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:36:04.679859 | orchestrator | 2026-03-11 00:36:04.679874 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init script] **************** 2026-03-11 00:36:04.679887 | orchestrator | Wednesday 11 March 2026 00:35:56 +0000 (0:00:01.063) 0:00:42.371 ******* 2026-03-11 00:36:04.679900 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:36:04.679912 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:36:04.679924 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:36:04.679936 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:36:04.679947 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:36:04.679959 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:36:04.679971 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:36:04.679983 | orchestrator | 2026-03-11 00:36:04.679994 | orchestrator | TASK [osism.commons.network : Deploy network-extra-init systemd service] ******* 2026-03-11 00:36:04.680006 | orchestrator | Wednesday 11 March 2026 00:35:57 +0000 (0:00:00.559) 0:00:42.930 ******* 2026-03-11 00:36:04.680018 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:36:04.680030 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:36:04.680043 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:36:04.680055 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:36:04.680068 | 
orchestrator | skipping: [testbed-node-3] 2026-03-11 00:36:04.680080 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:36:04.680093 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:36:04.680105 | orchestrator | 2026-03-11 00:36:04.680117 | orchestrator | TASK [osism.commons.network : Enable and start network-extra-init service] ***** 2026-03-11 00:36:04.680130 | orchestrator | Wednesday 11 March 2026 00:35:57 +0000 (0:00:00.696) 0:00:43.627 ******* 2026-03-11 00:36:04.680142 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:36:04.680173 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:36:04.680186 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:36:04.680198 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:36:04.680209 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:36:04.680221 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:36:04.680232 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:36:04.680244 | orchestrator | 2026-03-11 00:36:04.680256 | orchestrator | TASK [osism.commons.network : Disable and stop network-extra-init service] ***** 2026-03-11 00:36:04.680267 | orchestrator | Wednesday 11 March 2026 00:35:58 +0000 (0:00:00.518) 0:00:44.145 ******* 2026-03-11 00:36:04.680279 | orchestrator | ok: [testbed-manager] 2026-03-11 00:36:04.680291 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:36:04.680314 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:36:04.680326 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:36:04.680338 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:36:04.680350 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:36:04.680363 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:36:04.680375 | orchestrator | 2026-03-11 00:36:04.680387 | orchestrator | TASK [osism.commons.network : Remove network-extra-init systemd service] ******* 2026-03-11 00:36:04.680400 | orchestrator | Wednesday 11 March 2026 00:36:00 +0000 (0:00:01.607) 0:00:45.753 ******* 
2026-03-11 00:36:04.680411 | orchestrator | ok: [testbed-manager] 2026-03-11 00:36:04.680424 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:36:04.680435 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:36:04.680447 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:36:04.680459 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:36:04.680470 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:36:04.680481 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:36:04.680493 | orchestrator | 2026-03-11 00:36:04.680505 | orchestrator | TASK [osism.commons.network : Remove network-extra-init script] **************** 2026-03-11 00:36:04.680518 | orchestrator | Wednesday 11 March 2026 00:36:01 +0000 (0:00:00.980) 0:00:46.734 ******* 2026-03-11 00:36:04.680529 | orchestrator | ok: [testbed-manager] 2026-03-11 00:36:04.680541 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:36:04.680552 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:36:04.680564 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:36:04.680576 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:36:04.680588 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:36:04.680600 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:36:04.680611 | orchestrator | 2026-03-11 00:36:04.680623 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2026-03-11 00:36:04.680635 | orchestrator | Wednesday 11 March 2026 00:36:03 +0000 (0:00:02.229) 0:00:48.964 ******* 2026-03-11 00:36:04.680646 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:36:04.680659 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:36:04.680672 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:36:04.680684 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:36:04.680696 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:36:04.680709 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:36:04.680723 | orchestrator | skipping: [testbed-node-5] 2026-03-11 
00:36:04.680734 | orchestrator | 2026-03-11 00:36:04.680747 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2026-03-11 00:36:04.680760 | orchestrator | Wednesday 11 March 2026 00:36:04 +0000 (0:00:00.824) 0:00:49.788 ******* 2026-03-11 00:36:04.680773 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:36:04.680785 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:36:04.680798 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:36:04.680841 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:36:04.680854 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:36:04.680866 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:36:04.680879 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:36:04.680891 | orchestrator | 2026-03-11 00:36:04.680904 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:36:04.680918 | orchestrator | testbed-manager : ok=25  changed=5  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-11 00:36:04.680947 | orchestrator | testbed-node-0 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-11 00:36:04.680977 | orchestrator | testbed-node-1 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-11 00:36:05.012446 | orchestrator | testbed-node-2 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-11 00:36:05.012546 | orchestrator | testbed-node-3 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-11 00:36:05.012560 | orchestrator | testbed-node-4 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-11 00:36:05.012572 | orchestrator | testbed-node-5 : ok=24  changed=5  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-11 00:36:05.012583 | orchestrator | 2026-03-11 00:36:05.012594 | orchestrator | 2026-03-11 00:36:05.012606 | 
orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:36:05.012618 | orchestrator | Wednesday 11 March 2026 00:36:04 +0000 (0:00:00.552) 0:00:50.341 ******* 2026-03-11 00:36:05.012628 | orchestrator | =============================================================================== 2026-03-11 00:36:05.012639 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.21s 2026-03-11 00:36:05.012650 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.84s 2026-03-11 00:36:05.012661 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.32s 2026-03-11 00:36:05.012672 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.22s 2026-03-11 00:36:05.012683 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.29s 2026-03-11 00:36:05.012693 | orchestrator | osism.commons.network : Remove network-extra-init script ---------------- 2.23s 2026-03-11 00:36:05.012704 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.00s 2026-03-11 00:36:05.012715 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.94s 2026-03-11 00:36:05.012726 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.64s 2026-03-11 00:36:05.012736 | orchestrator | osism.commons.network : Disable and stop network-extra-init service ----- 1.61s 2026-03-11 00:36:05.012764 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.59s 2026-03-11 00:36:05.012776 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.54s 2026-03-11 00:36:05.012786 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.24s 2026-03-11 00:36:05.012797 | orchestrator | 
osism.commons.network : Remove unused configuration files --------------- 1.19s 2026-03-11 00:36:05.012836 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.18s 2026-03-11 00:36:05.012847 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.12s 2026-03-11 00:36:05.012857 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.07s 2026-03-11 00:36:05.012868 | orchestrator | osism.commons.network : Include network extra init ---------------------- 1.06s 2026-03-11 00:36:05.012879 | orchestrator | osism.commons.network : List existing configuration files --------------- 1.03s 2026-03-11 00:36:05.012889 | orchestrator | osism.commons.network : Remove network-extra-init systemd service ------- 0.98s 2026-03-11 00:36:05.313287 | orchestrator | + osism apply wireguard 2026-03-11 00:36:17.323863 | orchestrator | 2026-03-11 00:36:17 | INFO  | Prepare task for execution of wireguard. 2026-03-11 00:36:17.390275 | orchestrator | 2026-03-11 00:36:17 | INFO  | Task d0047682-5000-403d-99d1-f7329b064c83 (wireguard) was prepared for execution. 2026-03-11 00:36:17.390405 | orchestrator | 2026-03-11 00:36:17 | INFO  | It takes a moment until task d0047682-5000-403d-99d1-f7329b064c83 (wireguard) has been started and output is visible here. 
2026-03-11 00:36:35.842720 | orchestrator | 2026-03-11 00:36:35.842943 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2026-03-11 00:36:35.842959 | orchestrator | 2026-03-11 00:36:35.842970 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2026-03-11 00:36:35.842985 | orchestrator | Wednesday 11 March 2026 00:36:21 +0000 (0:00:00.188) 0:00:00.188 ******* 2026-03-11 00:36:35.842999 | orchestrator | ok: [testbed-manager] 2026-03-11 00:36:35.843010 | orchestrator | 2026-03-11 00:36:35.843020 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2026-03-11 00:36:35.843031 | orchestrator | Wednesday 11 March 2026 00:36:22 +0000 (0:00:01.191) 0:00:01.380 ******* 2026-03-11 00:36:35.843043 | orchestrator | changed: [testbed-manager] 2026-03-11 00:36:35.843055 | orchestrator | 2026-03-11 00:36:35.843065 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2026-03-11 00:36:35.843072 | orchestrator | Wednesday 11 March 2026 00:36:29 +0000 (0:00:06.725) 0:00:08.106 ******* 2026-03-11 00:36:35.843079 | orchestrator | changed: [testbed-manager] 2026-03-11 00:36:35.843085 | orchestrator | 2026-03-11 00:36:35.843092 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2026-03-11 00:36:35.843098 | orchestrator | Wednesday 11 March 2026 00:36:29 +0000 (0:00:00.557) 0:00:08.663 ******* 2026-03-11 00:36:35.843105 | orchestrator | changed: [testbed-manager] 2026-03-11 00:36:35.843111 | orchestrator | 2026-03-11 00:36:35.843117 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2026-03-11 00:36:35.843124 | orchestrator | Wednesday 11 March 2026 00:36:30 +0000 (0:00:00.440) 0:00:09.104 ******* 2026-03-11 00:36:35.843130 | orchestrator | ok: [testbed-manager] 2026-03-11 00:36:35.843136 | orchestrator | 2026-03-11 
00:36:35.843142 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2026-03-11 00:36:35.843148 | orchestrator | Wednesday 11 March 2026 00:36:30 +0000 (0:00:00.543) 0:00:09.647 ******* 2026-03-11 00:36:35.843155 | orchestrator | ok: [testbed-manager] 2026-03-11 00:36:35.843161 | orchestrator | 2026-03-11 00:36:35.843167 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2026-03-11 00:36:35.843173 | orchestrator | Wednesday 11 March 2026 00:36:31 +0000 (0:00:00.359) 0:00:10.007 ******* 2026-03-11 00:36:35.843179 | orchestrator | ok: [testbed-manager] 2026-03-11 00:36:35.843186 | orchestrator | 2026-03-11 00:36:35.843192 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2026-03-11 00:36:35.843198 | orchestrator | Wednesday 11 March 2026 00:36:31 +0000 (0:00:00.370) 0:00:10.378 ******* 2026-03-11 00:36:35.843204 | orchestrator | changed: [testbed-manager] 2026-03-11 00:36:35.843211 | orchestrator | 2026-03-11 00:36:35.843217 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2026-03-11 00:36:35.843223 | orchestrator | Wednesday 11 March 2026 00:36:32 +0000 (0:00:01.019) 0:00:11.398 ******* 2026-03-11 00:36:35.843230 | orchestrator | changed: [testbed-manager] => (item=None) 2026-03-11 00:36:35.843236 | orchestrator | changed: [testbed-manager] 2026-03-11 00:36:35.843242 | orchestrator | 2026-03-11 00:36:35.843248 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2026-03-11 00:36:35.843255 | orchestrator | Wednesday 11 March 2026 00:36:33 +0000 (0:00:00.815) 0:00:12.213 ******* 2026-03-11 00:36:35.843262 | orchestrator | changed: [testbed-manager] 2026-03-11 00:36:35.843270 | orchestrator | 2026-03-11 00:36:35.843277 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2026-03-11 
00:36:35.843284 | orchestrator | Wednesday 11 March 2026 00:36:34 +0000 (0:00:01.536) 0:00:13.750 ******* 2026-03-11 00:36:35.843292 | orchestrator | changed: [testbed-manager] 2026-03-11 00:36:35.843299 | orchestrator | 2026-03-11 00:36:35.843306 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:36:35.843337 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:36:35.843346 | orchestrator | 2026-03-11 00:36:35.843353 | orchestrator | 2026-03-11 00:36:35.843361 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:36:35.843368 | orchestrator | Wednesday 11 March 2026 00:36:35 +0000 (0:00:00.849) 0:00:14.599 ******* 2026-03-11 00:36:35.843375 | orchestrator | =============================================================================== 2026-03-11 00:36:35.843382 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.73s 2026-03-11 00:36:35.843389 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.54s 2026-03-11 00:36:35.843397 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.19s 2026-03-11 00:36:35.843404 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.02s 2026-03-11 00:36:35.843412 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.85s 2026-03-11 00:36:35.843419 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.82s 2026-03-11 00:36:35.843426 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.56s 2026-03-11 00:36:35.843448 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.54s 2026-03-11 00:36:35.843455 | orchestrator | osism.services.wireguard : 
Create preshared key ------------------------- 0.44s 2026-03-11 00:36:35.843462 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.37s 2026-03-11 00:36:35.843469 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.36s 2026-03-11 00:36:36.042248 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2026-03-11 00:36:36.073564 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2026-03-11 00:36:36.073689 | orchestrator | Dload Upload Total Spent Left Speed 2026-03-11 00:36:36.154975 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 183 0 --:--:-- --:--:-- --:--:-- 185 2026-03-11 00:36:36.168303 | orchestrator | + osism apply --environment custom workarounds 2026-03-11 00:36:37.927504 | orchestrator | 2026-03-11 00:36:37 | INFO  | Trying to run play workarounds in environment custom 2026-03-11 00:36:48.005836 | orchestrator | 2026-03-11 00:36:48 | INFO  | Prepare task for execution of workarounds. 2026-03-11 00:36:48.069958 | orchestrator | 2026-03-11 00:36:48 | INFO  | Task 99f7559b-93ea-491a-af6b-9a9f1af18747 (workarounds) was prepared for execution. 2026-03-11 00:36:48.070109 | orchestrator | 2026-03-11 00:36:48 | INFO  | It takes a moment until task 99f7559b-93ea-491a-af6b-9a9f1af18747 (workarounds) has been started and output is visible here. 
2026-03-11 00:37:11.418635 | orchestrator | 2026-03-11 00:37:11.418800 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 00:37:11.418819 | orchestrator | 2026-03-11 00:37:11.418832 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2026-03-11 00:37:11.418843 | orchestrator | Wednesday 11 March 2026 00:36:51 +0000 (0:00:00.124) 0:00:00.124 ******* 2026-03-11 00:37:11.418855 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2026-03-11 00:37:11.418866 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2026-03-11 00:37:11.418877 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2026-03-11 00:37:11.418888 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2026-03-11 00:37:11.418899 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2026-03-11 00:37:11.418910 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2026-03-11 00:37:11.418921 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2026-03-11 00:37:11.418958 | orchestrator | 2026-03-11 00:37:11.418970 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2026-03-11 00:37:11.418980 | orchestrator | 2026-03-11 00:37:11.418991 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2026-03-11 00:37:11.419002 | orchestrator | Wednesday 11 March 2026 00:36:52 +0000 (0:00:00.671) 0:00:00.795 ******* 2026-03-11 00:37:11.419013 | orchestrator | ok: [testbed-manager] 2026-03-11 00:37:11.419025 | orchestrator | 2026-03-11 00:37:11.419036 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2026-03-11 00:37:11.419046 | orchestrator | 2026-03-11 00:37:11.419057 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2026-03-11 00:37:11.419067 | orchestrator | Wednesday 11 March 2026 00:36:54 +0000 (0:00:02.083) 0:00:02.878 ******* 2026-03-11 00:37:11.419078 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:37:11.419089 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:37:11.419099 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:37:11.419110 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:37:11.419120 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:37:11.419131 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:37:11.419141 | orchestrator | 2026-03-11 00:37:11.419152 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2026-03-11 00:37:11.419162 | orchestrator | 2026-03-11 00:37:11.419174 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2026-03-11 00:37:11.419186 | orchestrator | Wednesday 11 March 2026 00:36:56 +0000 (0:00:02.015) 0:00:04.893 ******* 2026-03-11 00:37:11.419199 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-11 00:37:11.419213 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-11 00:37:11.419225 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-11 00:37:11.419237 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-11 00:37:11.419250 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-11 00:37:11.419276 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2026-03-11 00:37:11.419289 | orchestrator | 2026-03-11 00:37:11.419301 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2026-03-11 00:37:11.419313 | orchestrator | Wednesday 11 March 2026 00:36:58 +0000 (0:00:01.624) 0:00:06.518 ******* 2026-03-11 00:37:11.419326 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:37:11.419338 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:37:11.419351 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:37:11.419362 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:37:11.419382 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:37:11.419402 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:37:11.419419 | orchestrator | 2026-03-11 00:37:11.419439 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2026-03-11 00:37:11.419460 | orchestrator | Wednesday 11 March 2026 00:37:02 +0000 (0:00:03.873) 0:00:10.392 ******* 2026-03-11 00:37:11.419480 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:37:11.419501 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:37:11.419520 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:37:11.419535 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:37:11.419546 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:37:11.419557 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:37:11.419567 | orchestrator | 2026-03-11 00:37:11.419578 | orchestrator | PLAY [Add a workaround service] ************************************************ 2026-03-11 00:37:11.419588 | orchestrator | 2026-03-11 00:37:11.419599 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2026-03-11 00:37:11.419609 | orchestrator | Wednesday 11 March 2026 00:37:02 +0000 (0:00:00.538) 0:00:10.931 ******* 2026-03-11 00:37:11.419632 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:37:11.419704 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:37:11.419724 | orchestrator | changed: [testbed-node-2] 2026-03-11 
00:37:11.419742 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:37:11.419759 | orchestrator | changed: [testbed-manager] 2026-03-11 00:37:11.419776 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:37:11.419794 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:37:11.419809 | orchestrator | 2026-03-11 00:37:11.419826 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2026-03-11 00:37:11.419843 | orchestrator | Wednesday 11 March 2026 00:37:03 +0000 (0:00:01.411) 0:00:12.342 ******* 2026-03-11 00:37:11.419861 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:37:11.419878 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:37:11.419895 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:37:11.419913 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:37:11.419932 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:37:11.419949 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:37:11.419992 | orchestrator | changed: [testbed-manager] 2026-03-11 00:37:11.420011 | orchestrator | 2026-03-11 00:37:11.420030 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2026-03-11 00:37:11.420048 | orchestrator | Wednesday 11 March 2026 00:37:05 +0000 (0:00:01.297) 0:00:13.639 ******* 2026-03-11 00:37:11.420067 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:37:11.420078 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:37:11.420089 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:37:11.420100 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:37:11.420110 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:37:11.420121 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:37:11.420131 | orchestrator | ok: [testbed-manager] 2026-03-11 00:37:11.420142 | orchestrator | 2026-03-11 00:37:11.420153 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2026-03-11 00:37:11.420164 | orchestrator 
| Wednesday 11 March 2026 00:37:06 +0000 (0:00:01.339) 0:00:14.978 ******* 2026-03-11 00:37:11.420174 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:37:11.420185 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:37:11.420196 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:37:11.420206 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:37:11.420217 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:37:11.420227 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:37:11.420238 | orchestrator | changed: [testbed-manager] 2026-03-11 00:37:11.420248 | orchestrator | 2026-03-11 00:37:11.420259 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2026-03-11 00:37:11.420269 | orchestrator | Wednesday 11 March 2026 00:37:08 +0000 (0:00:01.514) 0:00:16.492 ******* 2026-03-11 00:37:11.420280 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:37:11.420290 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:37:11.420301 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:37:11.420311 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:37:11.420321 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:37:11.420332 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:37:11.420342 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:37:11.420353 | orchestrator | 2026-03-11 00:37:11.420363 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2026-03-11 00:37:11.420374 | orchestrator | 2026-03-11 00:37:11.420385 | orchestrator | TASK [Install python3-docker] ************************************************** 2026-03-11 00:37:11.420396 | orchestrator | Wednesday 11 March 2026 00:37:08 +0000 (0:00:00.559) 0:00:17.052 ******* 2026-03-11 00:37:11.420406 | orchestrator | ok: [testbed-manager] 2026-03-11 00:37:11.420417 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:37:11.420432 | orchestrator | ok: 
[testbed-node-2] 2026-03-11 00:37:11.420450 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:37:11.420467 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:37:11.420486 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:37:11.420523 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:37:11.420541 | orchestrator | 2026-03-11 00:37:11.420553 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:37:11.420565 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:37:11.420577 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:37:11.420588 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:37:11.420608 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:37:11.420619 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:37:11.420630 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:37:11.420641 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:37:11.420686 | orchestrator | 2026-03-11 00:37:11.420704 | orchestrator | 2026-03-11 00:37:11.420715 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:37:11.420726 | orchestrator | Wednesday 11 March 2026 00:37:11 +0000 (0:00:02.697) 0:00:19.750 ******* 2026-03-11 00:37:11.420737 | orchestrator | =============================================================================== 2026-03-11 00:37:11.420747 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.87s 2026-03-11 00:37:11.420758 | orchestrator | 
Install python3-docker -------------------------------------------------- 2.70s 2026-03-11 00:37:11.420769 | orchestrator | Apply netplan configuration --------------------------------------------- 2.08s 2026-03-11 00:37:11.420779 | orchestrator | Apply netplan configuration --------------------------------------------- 2.02s 2026-03-11 00:37:11.420790 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.63s 2026-03-11 00:37:11.420800 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.51s 2026-03-11 00:37:11.420811 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.41s 2026-03-11 00:37:11.420822 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.34s 2026-03-11 00:37:11.420832 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.30s 2026-03-11 00:37:11.420843 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.67s 2026-03-11 00:37:11.420854 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.56s 2026-03-11 00:37:11.420874 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.54s 2026-03-11 00:37:11.796107 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2026-03-11 00:37:23.764362 | orchestrator | 2026-03-11 00:37:23 | INFO  | Prepare task for execution of reboot. 2026-03-11 00:37:23.844504 | orchestrator | 2026-03-11 00:37:23 | INFO  | Task af3d8e1b-decc-48a6-afdb-06f941ce9f40 (reboot) was prepared for execution. 2026-03-11 00:37:23.844603 | orchestrator | 2026-03-11 00:37:23 | INFO  | It takes a moment until task af3d8e1b-decc-48a6-afdb-06f941ce9f40 (reboot) has been started and output is visible here. 
2026-03-11 00:37:33.946127 | orchestrator | 2026-03-11 00:37:33.946258 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-11 00:37:33.946277 | orchestrator | 2026-03-11 00:37:33.946289 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-11 00:37:33.947179 | orchestrator | Wednesday 11 March 2026 00:37:28 +0000 (0:00:00.239) 0:00:00.239 ******* 2026-03-11 00:37:33.947270 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:37:33.947286 | orchestrator | 2026-03-11 00:37:33.947299 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-11 00:37:33.947310 | orchestrator | Wednesday 11 March 2026 00:37:28 +0000 (0:00:00.093) 0:00:00.333 ******* 2026-03-11 00:37:33.947321 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:37:33.947332 | orchestrator | 2026-03-11 00:37:33.947343 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-11 00:37:33.947354 | orchestrator | Wednesday 11 March 2026 00:37:29 +0000 (0:00:00.942) 0:00:01.275 ******* 2026-03-11 00:37:33.947365 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:37:33.947376 | orchestrator | 2026-03-11 00:37:33.947387 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-11 00:37:33.947397 | orchestrator | 2026-03-11 00:37:33.947408 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-11 00:37:33.947419 | orchestrator | Wednesday 11 March 2026 00:37:29 +0000 (0:00:00.127) 0:00:01.403 ******* 2026-03-11 00:37:33.947430 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:37:33.947441 | orchestrator | 2026-03-11 00:37:33.947452 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-11 00:37:33.947462 | orchestrator | Wednesday 11 March 
2026 00:37:29 +0000 (0:00:00.103) 0:00:01.507 ******* 2026-03-11 00:37:33.947473 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:37:33.947484 | orchestrator | 2026-03-11 00:37:33.947495 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-11 00:37:33.947506 | orchestrator | Wednesday 11 March 2026 00:37:30 +0000 (0:00:00.685) 0:00:02.192 ******* 2026-03-11 00:37:33.947517 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:37:33.947528 | orchestrator | 2026-03-11 00:37:33.947539 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-11 00:37:33.947550 | orchestrator | 2026-03-11 00:37:33.947561 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-11 00:37:33.947572 | orchestrator | Wednesday 11 March 2026 00:37:30 +0000 (0:00:00.116) 0:00:02.308 ******* 2026-03-11 00:37:33.947582 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:37:33.947593 | orchestrator | 2026-03-11 00:37:33.947633 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-11 00:37:33.947644 | orchestrator | Wednesday 11 March 2026 00:37:30 +0000 (0:00:00.246) 0:00:02.555 ******* 2026-03-11 00:37:33.947674 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:37:33.947685 | orchestrator | 2026-03-11 00:37:33.947696 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-11 00:37:33.947708 | orchestrator | Wednesday 11 March 2026 00:37:31 +0000 (0:00:00.665) 0:00:03.220 ******* 2026-03-11 00:37:33.947718 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:37:33.947729 | orchestrator | 2026-03-11 00:37:33.947740 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-11 00:37:33.947750 | orchestrator | 2026-03-11 00:37:33.947761 | orchestrator | TASK [Exit playbook, 
if user did not mean to reboot systems] ******************* 2026-03-11 00:37:33.947772 | orchestrator | Wednesday 11 March 2026 00:37:31 +0000 (0:00:00.110) 0:00:03.331 ******* 2026-03-11 00:37:33.947783 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:37:33.947793 | orchestrator | 2026-03-11 00:37:33.947804 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-11 00:37:33.947815 | orchestrator | Wednesday 11 March 2026 00:37:31 +0000 (0:00:00.095) 0:00:03.426 ******* 2026-03-11 00:37:33.947826 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:37:33.947837 | orchestrator | 2026-03-11 00:37:33.947848 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-11 00:37:33.947859 | orchestrator | Wednesday 11 March 2026 00:37:31 +0000 (0:00:00.666) 0:00:04.092 ******* 2026-03-11 00:37:33.947869 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:37:33.947899 | orchestrator | 2026-03-11 00:37:33.947910 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-11 00:37:33.947921 | orchestrator | 2026-03-11 00:37:33.947933 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-11 00:37:33.947945 | orchestrator | Wednesday 11 March 2026 00:37:32 +0000 (0:00:00.113) 0:00:04.206 ******* 2026-03-11 00:37:33.947955 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:37:33.947966 | orchestrator | 2026-03-11 00:37:33.947977 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-11 00:37:33.947988 | orchestrator | Wednesday 11 March 2026 00:37:32 +0000 (0:00:00.095) 0:00:04.302 ******* 2026-03-11 00:37:33.947999 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:37:33.948009 | orchestrator | 2026-03-11 00:37:33.948020 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2026-03-11 00:37:33.948031 | orchestrator | Wednesday 11 March 2026 00:37:32 +0000 (0:00:00.674) 0:00:04.977 ******* 2026-03-11 00:37:33.948042 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:37:33.948053 | orchestrator | 2026-03-11 00:37:33.948064 | orchestrator | PLAY [Reboot systems] ********************************************************** 2026-03-11 00:37:33.948075 | orchestrator | 2026-03-11 00:37:33.948085 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2026-03-11 00:37:33.948096 | orchestrator | Wednesday 11 March 2026 00:37:32 +0000 (0:00:00.112) 0:00:05.089 ******* 2026-03-11 00:37:33.948107 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:37:33.948118 | orchestrator | 2026-03-11 00:37:33.948129 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2026-03-11 00:37:33.948140 | orchestrator | Wednesday 11 March 2026 00:37:33 +0000 (0:00:00.098) 0:00:05.187 ******* 2026-03-11 00:37:33.948151 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:37:33.948162 | orchestrator | 2026-03-11 00:37:33.948173 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2026-03-11 00:37:33.948184 | orchestrator | Wednesday 11 March 2026 00:37:33 +0000 (0:00:00.661) 0:00:05.849 ******* 2026-03-11 00:37:33.948220 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:37:33.948232 | orchestrator | 2026-03-11 00:37:33.948243 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:37:33.948255 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:37:33.948268 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:37:33.948278 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2026-03-11 00:37:33.948289 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:37:33.948300 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:37:33.948311 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:37:33.948321 | orchestrator | 2026-03-11 00:37:33.948332 | orchestrator | 2026-03-11 00:37:33.948343 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:37:33.948354 | orchestrator | Wednesday 11 March 2026 00:37:33 +0000 (0:00:00.038) 0:00:05.888 ******* 2026-03-11 00:37:33.948365 | orchestrator | =============================================================================== 2026-03-11 00:37:33.948375 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.30s 2026-03-11 00:37:33.948386 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.73s 2026-03-11 00:37:33.948397 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.62s 2026-03-11 00:37:34.160801 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2026-03-11 00:37:45.966501 | orchestrator | 2026-03-11 00:37:45 | INFO  | Prepare task for execution of wait-for-connection. 2026-03-11 00:37:46.044947 | orchestrator | 2026-03-11 00:37:46 | INFO  | Task 3757c15a-e452-4423-b746-fc5c78094f74 (wait-for-connection) was prepared for execution. 2026-03-11 00:37:46.045028 | orchestrator | 2026-03-11 00:37:46 | INFO  | It takes a moment until task 3757c15a-e452-4423-b746-fc5c78094f74 (wait-for-connection) has been started and output is visible here. 
2026-03-11 00:38:01.899418 | orchestrator | 2026-03-11 00:38:01.899536 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2026-03-11 00:38:01.899617 | orchestrator | 2026-03-11 00:38:01.899636 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2026-03-11 00:38:01.899656 | orchestrator | Wednesday 11 March 2026 00:37:50 +0000 (0:00:00.208) 0:00:00.208 ******* 2026-03-11 00:38:01.899673 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:38:01.899685 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:38:01.899695 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:38:01.899705 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:38:01.899714 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:38:01.899725 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:38:01.899741 | orchestrator | 2026-03-11 00:38:01.899759 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:38:01.899776 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:38:01.899795 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:38:01.899813 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:38:01.899824 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:38:01.899834 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:38:01.899843 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:38:01.899853 | orchestrator | 2026-03-11 00:38:01.899863 | orchestrator | 2026-03-11 00:38:01.899873 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-11 00:38:01.899882 | orchestrator | Wednesday 11 March 2026 00:38:01 +0000 (0:00:11.502) 0:00:11.710 ******* 2026-03-11 00:38:01.899892 | orchestrator | =============================================================================== 2026-03-11 00:38:01.899902 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.50s 2026-03-11 00:38:02.170928 | orchestrator | + osism apply hddtemp 2026-03-11 00:38:14.112101 | orchestrator | 2026-03-11 00:38:14 | INFO  | Prepare task for execution of hddtemp. 2026-03-11 00:38:14.175624 | orchestrator | 2026-03-11 00:38:14 | INFO  | Task b145a5bb-8893-4b8f-b86d-307b481b940e (hddtemp) was prepared for execution. 2026-03-11 00:38:14.175743 | orchestrator | 2026-03-11 00:38:14 | INFO  | It takes a moment until task b145a5bb-8893-4b8f-b86d-307b481b940e (hddtemp) has been started and output is visible here. 2026-03-11 00:38:42.654530 | orchestrator | 2026-03-11 00:38:42.654615 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2026-03-11 00:38:42.654625 | orchestrator | 2026-03-11 00:38:42.654632 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2026-03-11 00:38:42.654638 | orchestrator | Wednesday 11 March 2026 00:38:18 +0000 (0:00:00.215) 0:00:00.215 ******* 2026-03-11 00:38:42.654663 | orchestrator | ok: [testbed-manager] 2026-03-11 00:38:42.654671 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:38:42.654676 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:38:42.654682 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:38:42.654687 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:38:42.654693 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:38:42.654698 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:38:42.654704 | orchestrator | 2026-03-11 00:38:42.654710 | orchestrator | TASK [osism.services.hddtemp : Include 
distribution specific install tasks] **** 2026-03-11 00:38:42.654715 | orchestrator | Wednesday 11 March 2026 00:38:18 +0000 (0:00:00.520) 0:00:00.736 ******* 2026-03-11 00:38:42.654722 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:38:42.654729 | orchestrator | 2026-03-11 00:38:42.654735 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2026-03-11 00:38:42.654740 | orchestrator | Wednesday 11 March 2026 00:38:19 +0000 (0:00:00.947) 0:00:01.684 ******* 2026-03-11 00:38:42.654746 | orchestrator | ok: [testbed-manager] 2026-03-11 00:38:42.654751 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:38:42.654756 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:38:42.654762 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:38:42.654767 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:38:42.654772 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:38:42.654778 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:38:42.654783 | orchestrator | 2026-03-11 00:38:42.654788 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2026-03-11 00:38:42.654794 | orchestrator | Wednesday 11 March 2026 00:38:21 +0000 (0:00:01.995) 0:00:03.679 ******* 2026-03-11 00:38:42.654799 | orchestrator | changed: [testbed-manager] 2026-03-11 00:38:42.654806 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:38:42.654812 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:38:42.654817 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:38:42.654822 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:38:42.654828 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:38:42.654833 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:38:42.654838 | 
orchestrator | 2026-03-11 00:38:42.654856 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] ********* 2026-03-11 00:38:42.654861 | orchestrator | Wednesday 11 March 2026 00:38:22 +0000 (0:00:01.024) 0:00:04.704 ******* 2026-03-11 00:38:42.654867 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:38:42.654872 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:38:42.654878 | orchestrator | ok: [testbed-manager] 2026-03-11 00:38:42.654883 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:38:42.654888 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:38:42.654894 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:38:42.654899 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:38:42.654905 | orchestrator | 2026-03-11 00:38:42.654910 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2026-03-11 00:38:42.654916 | orchestrator | Wednesday 11 March 2026 00:38:24 +0000 (0:00:02.100) 0:00:06.805 ******* 2026-03-11 00:38:42.654921 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:38:42.654926 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:38:42.654932 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:38:42.654937 | orchestrator | changed: [testbed-manager] 2026-03-11 00:38:42.654943 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:38:42.654948 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:38:42.654953 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:38:42.654959 | orchestrator | 2026-03-11 00:38:42.654964 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2026-03-11 00:38:42.654970 | orchestrator | Wednesday 11 March 2026 00:38:25 +0000 (0:00:00.666) 0:00:07.472 ******* 2026-03-11 00:38:42.654975 | orchestrator | changed: [testbed-manager] 2026-03-11 00:38:42.654980 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:38:42.654990 | orchestrator | changed: [testbed-node-3] 
2026-03-11 00:38:42.654996 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:38:42.655002 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:38:42.655007 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:38:42.655012 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:38:42.655018 | orchestrator | 2026-03-11 00:38:42.655023 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2026-03-11 00:38:42.655028 | orchestrator | Wednesday 11 March 2026 00:38:39 +0000 (0:00:14.221) 0:00:21.693 ******* 2026-03-11 00:38:42.655034 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:38:42.655040 | orchestrator | 2026-03-11 00:38:42.655045 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2026-03-11 00:38:42.655052 | orchestrator | Wednesday 11 March 2026 00:38:40 +0000 (0:00:01.070) 0:00:22.763 ******* 2026-03-11 00:38:42.655058 | orchestrator | changed: [testbed-manager] 2026-03-11 00:38:42.655064 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:38:42.655070 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:38:42.655076 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:38:42.655083 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:38:42.655089 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:38:42.655095 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:38:42.655101 | orchestrator | 2026-03-11 00:38:42.655107 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:38:42.655113 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:38:42.655132 | orchestrator | testbed-node-0 : ok=8  
changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:38:42.655140 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:38:42.655147 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:38:42.655153 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:38:42.655159 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:38:42.655165 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:38:42.655171 | orchestrator | 2026-03-11 00:38:42.655177 | orchestrator | 2026-03-11 00:38:42.655184 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:38:42.655190 | orchestrator | Wednesday 11 March 2026 00:38:42 +0000 (0:00:01.752) 0:00:24.516 ******* 2026-03-11 00:38:42.655196 | orchestrator | =============================================================================== 2026-03-11 00:38:42.655203 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 14.22s 2026-03-11 00:38:42.655209 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 2.10s 2026-03-11 00:38:42.655215 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.00s 2026-03-11 00:38:42.655221 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.75s 2026-03-11 00:38:42.655227 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.07s 2026-03-11 00:38:42.655234 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.02s 2026-03-11 00:38:42.655245 | orchestrator | osism.services.hddtemp : Include 
distribution specific install tasks ---- 0.95s 2026-03-11 00:38:42.655255 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.67s 2026-03-11 00:38:42.655261 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.52s 2026-03-11 00:38:42.846434 | orchestrator | ++ semver latest 7.1.1 2026-03-11 00:38:42.886098 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-11 00:38:42.886227 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-11 00:38:42.886255 | orchestrator | + sudo systemctl restart manager.service 2026-03-11 00:38:56.445871 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2026-03-11 00:38:56.445939 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2026-03-11 00:38:56.445946 | orchestrator | + local max_attempts=60 2026-03-11 00:38:56.445951 | orchestrator | + local name=ceph-ansible 2026-03-11 00:38:56.445956 | orchestrator | + local attempt_num=1 2026-03-11 00:38:56.445960 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-11 00:38:56.488589 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-11 00:38:56.488676 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-11 00:38:56.488690 | orchestrator | + sleep 5 2026-03-11 00:39:01.492898 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-11 00:39:01.523281 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-11 00:39:01.523389 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-11 00:39:01.523407 | orchestrator | + sleep 5 2026-03-11 00:39:06.526957 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-11 00:39:06.561547 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-11 00:39:06.561687 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-11 00:39:06.561713 | orchestrator | + sleep 5 2026-03-11 00:39:11.564648 | 
orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-11 00:39:11.602829 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-11 00:39:11.602922 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-11 00:39:11.602938 | orchestrator | + sleep 5 2026-03-11 00:39:16.606626 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-11 00:39:16.639481 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-11 00:39:16.639609 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-11 00:39:16.639626 | orchestrator | + sleep 5 2026-03-11 00:39:21.644722 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-11 00:39:21.683674 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-11 00:39:21.683771 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-11 00:39:21.683786 | orchestrator | + sleep 5 2026-03-11 00:39:26.687746 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-11 00:39:26.725962 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2026-03-11 00:39:26.726106 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-11 00:39:26.726122 | orchestrator | + sleep 5 2026-03-11 00:39:31.731862 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-11 00:39:31.757995 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-11 00:39:31.758186 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-11 00:39:31.758207 | orchestrator | + sleep 5 2026-03-11 00:39:36.761885 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-11 00:39:36.792706 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-11 00:39:36.792812 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-11 00:39:36.792825 | orchestrator | + sleep 5 2026-03-11 00:39:41.795604 | orchestrator | ++ 
/usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-11 00:39:41.826325 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-11 00:39:41.826464 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-11 00:39:41.826488 | orchestrator | + sleep 5 2026-03-11 00:39:46.830559 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-11 00:39:46.867492 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-11 00:39:46.867592 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-11 00:39:46.867609 | orchestrator | + sleep 5 2026-03-11 00:39:51.870923 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-11 00:39:51.908081 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-11 00:39:51.908195 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-11 00:39:51.908241 | orchestrator | + sleep 5 2026-03-11 00:39:56.911677 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-11 00:39:56.945496 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]] 2026-03-11 00:39:56.945613 | orchestrator | + (( attempt_num++ == max_attempts )) 2026-03-11 00:39:56.945639 | orchestrator | + sleep 5 2026-03-11 00:40:01.949231 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2026-03-11 00:40:01.986653 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-11 00:40:01.986734 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2026-03-11 00:40:01.986744 | orchestrator | + local max_attempts=60 2026-03-11 00:40:01.986752 | orchestrator | + local name=kolla-ansible 2026-03-11 00:40:01.986759 | orchestrator | + local attempt_num=1 2026-03-11 00:40:01.987346 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2026-03-11 00:40:02.021367 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-11 00:40:02.021442 | orchestrator | + 
wait_for_container_healthy 60 osism-ansible 2026-03-11 00:40:02.021451 | orchestrator | + local max_attempts=60 2026-03-11 00:40:02.021458 | orchestrator | + local name=osism-ansible 2026-03-11 00:40:02.021464 | orchestrator | + local attempt_num=1 2026-03-11 00:40:02.021886 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible 2026-03-11 00:40:02.057472 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2026-03-11 00:40:02.057570 | orchestrator | + [[ true == \t\r\u\e ]] 2026-03-11 00:40:02.057586 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2026-03-11 00:40:02.215036 | orchestrator | ARA in ceph-ansible already disabled. 2026-03-11 00:40:02.353841 | orchestrator | ARA in kolla-ansible already disabled. 2026-03-11 00:40:02.525727 | orchestrator | ARA in osism-ansible already disabled. 2026-03-11 00:40:02.678369 | orchestrator | ARA in osism-kubernetes already disabled. 2026-03-11 00:40:02.678621 | orchestrator | + osism apply gather-facts 2026-03-11 00:40:14.717696 | orchestrator | 2026-03-11 00:40:14 | INFO  | Prepare task for execution of gather-facts. 2026-03-11 00:40:14.781466 | orchestrator | 2026-03-11 00:40:14 | INFO  | Task ea01e5d6-4605-4f34-ab04-2c2e5f5b0b11 (gather-facts) was prepared for execution. 2026-03-11 00:40:14.781540 | orchestrator | 2026-03-11 00:40:14 | INFO  | It takes a moment until task ea01e5d6-4605-4f34-ab04-2c2e5f5b0b11 (gather-facts) has been started and output is visible here. 
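The `wait_for_container_healthy` calls traced above (for `ceph-ansible`, `kolla-ansible`, and `osism-ansible`) poll `docker inspect` on `.State.Health.Status` every 5 seconds up to `max_attempts` times. A reconstruction from that xtrace output, under the assumption that the loop body matches the trace exactly; the `DOCKER` override is an addition for testability, the traced script calls `/usr/bin/docker` directly:

```shell
# Reconstructed from the xtrace lines above: local max_attempts/name/
# attempt_num, docker inspect on .State.Health.Status, sleep 5 between polls.
DOCKER="${DOCKER:-/usr/bin/docker}"

wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1

    # Poll until the container reports "healthy" (trace shows it passing
    # through "unhealthy" and "starting" first).
    until [[ "$("$DOCKER" inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        # Give up after max_attempts polls.
        if (( attempt_num++ == max_attempts )); then
            echo "Container ${name} did not become healthy" >&2
            return 1
        fi
        sleep 5
    done
}
```

With `max_attempts=60` and a 5-second sleep, the trace above allows roughly five minutes for each container to come up; `ceph-ansible` needed about 13 polls after the `manager.service` restart.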
2026-03-11 00:40:28.306684 | orchestrator | 2026-03-11 00:40:28.306791 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-11 00:40:28.306806 | orchestrator | 2026-03-11 00:40:28.306817 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2026-03-11 00:40:28.306828 | orchestrator | Wednesday 11 March 2026 00:40:18 +0000 (0:00:00.167) 0:00:00.167 ******* 2026-03-11 00:40:28.306838 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:40:28.306849 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:40:28.306859 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:40:28.306869 | orchestrator | ok: [testbed-manager] 2026-03-11 00:40:28.306879 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:40:28.306888 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:40:28.306898 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:40:28.306908 | orchestrator | 2026-03-11 00:40:28.306918 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-11 00:40:28.306927 | orchestrator | 2026-03-11 00:40:28.306937 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-11 00:40:28.306947 | orchestrator | Wednesday 11 March 2026 00:40:27 +0000 (0:00:09.239) 0:00:09.407 ******* 2026-03-11 00:40:28.306957 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:40:28.306968 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:40:28.306977 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:40:28.306987 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:40:28.306997 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:40:28.307006 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:40:28.307016 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:40:28.307025 | orchestrator | 2026-03-11 00:40:28.307035 | orchestrator | PLAY RECAP 
********************************************************************* 2026-03-11 00:40:28.307045 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:40:28.307082 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:40:28.307093 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:40:28.307102 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:40:28.307112 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:40:28.307122 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:40:28.307131 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 00:40:28.307141 | orchestrator | 2026-03-11 00:40:28.307150 | orchestrator | 2026-03-11 00:40:28.307160 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:40:28.307169 | orchestrator | Wednesday 11 March 2026 00:40:27 +0000 (0:00:00.547) 0:00:09.954 ******* 2026-03-11 00:40:28.307179 | orchestrator | =============================================================================== 2026-03-11 00:40:28.307188 | orchestrator | Gathers facts about hosts ----------------------------------------------- 9.24s 2026-03-11 00:40:28.307215 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.55s 2026-03-11 00:40:28.584695 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper 2026-03-11 00:40:28.596155 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible 2026-03-11 
00:40:28.608242 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook 2026-03-11 00:40:28.618244 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure 2026-03-11 00:40:28.628800 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack 2026-03-11 00:40:28.639168 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/320-openstack-minimal.sh /usr/local/bin/deploy-openstack-minimal 2026-03-11 00:40:28.649845 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring 2026-03-11 00:40:28.658492 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes 2026-03-11 00:40:28.669501 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi 2026-03-11 00:40:28.685082 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade-manager.sh /usr/local/bin/upgrade-manager 2026-03-11 00:40:28.694115 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible 2026-03-11 00:40:28.702954 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook 2026-03-11 00:40:28.715466 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure 2026-03-11 00:40:28.735662 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack 2026-03-11 00:40:28.752687 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/320-openstack-minimal.sh /usr/local/bin/upgrade-openstack-minimal 2026-03-11 00:40:28.766465 | orchestrator | + sudo ln -sf 
/opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring 2026-03-11 00:40:28.784031 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes 2026-03-11 00:40:28.798253 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi 2026-03-11 00:40:28.813684 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack 2026-03-11 00:40:28.829888 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia 2026-03-11 00:40:28.846319 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi 2026-03-11 00:40:28.859956 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry 2026-03-11 00:40:28.871024 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images 2026-03-11 00:40:28.882097 | orchestrator | + [[ false == \t\r\u\e ]] 2026-03-11 00:40:29.141837 | orchestrator | ok: Runtime: 0:24:11.152253 2026-03-11 00:40:29.255576 | 2026-03-11 00:40:29.255716 | TASK [Deploy services] 2026-03-11 00:40:29.789419 | orchestrator | skipping: Conditional result was False 2026-03-11 00:40:29.807115 | 2026-03-11 00:40:29.807274 | TASK [Deploy in a nutshell] 2026-03-11 00:40:30.512447 | orchestrator | + set -e 2026-03-11 00:40:30.513644 | orchestrator | 2026-03-11 00:40:30.513680 | orchestrator | # PULL IMAGES 2026-03-11 00:40:30.513695 | orchestrator | 2026-03-11 00:40:30.513714 | orchestrator | + source /opt/configuration/scripts/include.sh 2026-03-11 00:40:30.513734 | orchestrator | ++ export INTERACTIVE=false 2026-03-11 00:40:30.513748 | orchestrator | ++ INTERACTIVE=false 2026-03-11 00:40:30.513793 | 
orchestrator | ++ export OSISM_APPLY_RETRY=1 2026-03-11 00:40:30.513846 | orchestrator | ++ OSISM_APPLY_RETRY=1 2026-03-11 00:40:30.513861 | orchestrator | + source /opt/manager-vars.sh 2026-03-11 00:40:30.513873 | orchestrator | ++ export NUMBER_OF_NODES=6 2026-03-11 00:40:30.513891 | orchestrator | ++ NUMBER_OF_NODES=6 2026-03-11 00:40:30.513902 | orchestrator | ++ export CEPH_VERSION=reef 2026-03-11 00:40:30.513920 | orchestrator | ++ CEPH_VERSION=reef 2026-03-11 00:40:30.513931 | orchestrator | ++ export CONFIGURATION_VERSION=main 2026-03-11 00:40:30.513949 | orchestrator | ++ CONFIGURATION_VERSION=main 2026-03-11 00:40:30.513959 | orchestrator | ++ export MANAGER_VERSION=latest 2026-03-11 00:40:30.513973 | orchestrator | ++ MANAGER_VERSION=latest 2026-03-11 00:40:30.513984 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2026-03-11 00:40:30.513997 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2026-03-11 00:40:30.514007 | orchestrator | ++ export ARA=false 2026-03-11 00:40:30.514057 | orchestrator | ++ ARA=false 2026-03-11 00:40:30.514071 | orchestrator | ++ export DEPLOY_MODE=manager 2026-03-11 00:40:30.514083 | orchestrator | ++ DEPLOY_MODE=manager 2026-03-11 00:40:30.514093 | orchestrator | ++ export TEMPEST=true 2026-03-11 00:40:30.514103 | orchestrator | ++ TEMPEST=true 2026-03-11 00:40:30.514114 | orchestrator | ++ export IS_ZUUL=true 2026-03-11 00:40:30.514124 | orchestrator | ++ IS_ZUUL=true 2026-03-11 00:40:30.514135 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.142 2026-03-11 00:40:30.514146 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.142 2026-03-11 00:40:30.514157 | orchestrator | ++ export EXTERNAL_API=false 2026-03-11 00:40:30.514167 | orchestrator | ++ EXTERNAL_API=false 2026-03-11 00:40:30.514178 | orchestrator | ++ export IMAGE_USER=ubuntu 2026-03-11 00:40:30.514189 | orchestrator | ++ IMAGE_USER=ubuntu 2026-03-11 00:40:30.514200 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2026-03-11 00:40:30.514210 | 
orchestrator | ++ IMAGE_NODE_USER=ubuntu 2026-03-11 00:40:30.514221 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2026-03-11 00:40:30.514232 | orchestrator | ++ CEPH_STACK=ceph-ansible 2026-03-11 00:40:30.514243 | orchestrator | + echo 2026-03-11 00:40:30.514253 | orchestrator | + echo '# PULL IMAGES' 2026-03-11 00:40:30.514286 | orchestrator | + echo 2026-03-11 00:40:30.514313 | orchestrator | ++ semver latest 7.0.0 2026-03-11 00:40:30.565109 | orchestrator | + [[ -1 -ge 0 ]] 2026-03-11 00:40:30.565211 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2026-03-11 00:40:30.565226 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images 2026-03-11 00:40:32.365244 | orchestrator | 2026-03-11 00:40:32 | INFO  | Trying to run play pull-images in environment custom 2026-03-11 00:40:42.431020 | orchestrator | 2026-03-11 00:40:42 | INFO  | Prepare task for execution of pull-images. 2026-03-11 00:40:42.495514 | orchestrator | 2026-03-11 00:40:42 | INFO  | Task 46b5a0c0-df8a-42d4-847a-ea908789fc07 (pull-images) was prepared for execution. 2026-03-11 00:40:42.495641 | orchestrator | 2026-03-11 00:40:42 | INFO  | Task 46b5a0c0-df8a-42d4-847a-ea908789fc07 is running in background. No more output. Check ARA for logs. 2026-03-11 00:40:44.550170 | orchestrator | 2026-03-11 00:40:44 | INFO  | Trying to run play wipe-partitions in environment custom 2026-03-11 00:40:54.571007 | orchestrator | 2026-03-11 00:40:54 | INFO  | Prepare task for execution of wipe-partitions. 2026-03-11 00:40:54.641601 | orchestrator | 2026-03-11 00:40:54 | INFO  | Task 515a1f93-03b7-4d33-8ddc-3fb067a224ee (wipe-partitions) was prepared for execution. 2026-03-11 00:40:54.641700 | orchestrator | 2026-03-11 00:40:54 | INFO  | It takes a moment until task 515a1f93-03b7-4d33-8ddc-3fb067a224ee (wipe-partitions) has been started and output is visible here. 
2026-03-11 00:41:06.485791 | orchestrator | 2026-03-11 00:41:06.485940 | orchestrator | PLAY [Wipe partitions] ********************************************************* 2026-03-11 00:41:06.485971 | orchestrator | 2026-03-11 00:41:06.485992 | orchestrator | TASK [Find all logical devices owned by UID 167] ******************************* 2026-03-11 00:41:06.486094 | orchestrator | Wednesday 11 March 2026 00:40:58 +0000 (0:00:00.113) 0:00:00.113 ******* 2026-03-11 00:41:06.486169 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:41:06.486267 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:41:06.486293 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:41:06.486312 | orchestrator | 2026-03-11 00:41:06.486333 | orchestrator | TASK [Remove all rook related logical devices] ********************************* 2026-03-11 00:41:06.486352 | orchestrator | Wednesday 11 March 2026 00:40:59 +0000 (0:00:00.584) 0:00:00.698 ******* 2026-03-11 00:41:06.486378 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:06.486398 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:41:06.486419 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:41:06.486437 | orchestrator | 2026-03-11 00:41:06.486457 | orchestrator | TASK [Find all logical devices with prefix ceph] ******************************* 2026-03-11 00:41:06.486477 | orchestrator | Wednesday 11 March 2026 00:40:59 +0000 (0:00:00.349) 0:00:01.047 ******* 2026-03-11 00:41:06.486495 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:41:06.486514 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:41:06.486534 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:41:06.486553 | orchestrator | 2026-03-11 00:41:06.486572 | orchestrator | TASK [Remove all ceph related logical devices] ********************************* 2026-03-11 00:41:06.486591 | orchestrator | Wednesday 11 March 2026 00:40:59 +0000 (0:00:00.578) 0:00:01.626 ******* 2026-03-11 00:41:06.486609 | orchestrator | skipping: 
[testbed-node-3] 2026-03-11 00:41:06.486628 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:41:06.486649 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:41:06.486668 | orchestrator | 2026-03-11 00:41:06.486689 | orchestrator | TASK [Check device availability] *********************************************** 2026-03-11 00:41:06.486703 | orchestrator | Wednesday 11 March 2026 00:41:00 +0000 (0:00:00.241) 0:00:01.868 ******* 2026-03-11 00:41:06.486713 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-11 00:41:06.486733 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-11 00:41:06.486751 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-11 00:41:06.486769 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-11 00:41:06.486788 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-11 00:41:06.486806 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-11 00:41:06.486824 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-11 00:41:06.486842 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-11 00:41:06.486859 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-11 00:41:06.486878 | orchestrator | 2026-03-11 00:41:06.486896 | orchestrator | TASK [Wipe partitions with wipefs] ********************************************* 2026-03-11 00:41:06.486914 | orchestrator | Wednesday 11 March 2026 00:41:01 +0000 (0:00:01.185) 0:00:03.053 ******* 2026-03-11 00:41:06.486932 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb) 2026-03-11 00:41:06.486952 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb) 2026-03-11 00:41:06.486969 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb) 2026-03-11 00:41:06.486989 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc) 2026-03-11 00:41:06.487008 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc) 2026-03-11 00:41:06.487027 | orchestrator | ok: 
[testbed-node-5] => (item=/dev/sdc) 2026-03-11 00:41:06.487045 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd) 2026-03-11 00:41:06.487064 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd) 2026-03-11 00:41:06.487083 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd) 2026-03-11 00:41:06.487102 | orchestrator | 2026-03-11 00:41:06.487120 | orchestrator | TASK [Overwrite first 32M with zeros] ****************************************** 2026-03-11 00:41:06.487139 | orchestrator | Wednesday 11 March 2026 00:41:02 +0000 (0:00:01.519) 0:00:04.573 ******* 2026-03-11 00:41:06.487157 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb) 2026-03-11 00:41:06.487175 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb) 2026-03-11 00:41:06.487219 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb) 2026-03-11 00:41:06.487251 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc) 2026-03-11 00:41:06.487286 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc) 2026-03-11 00:41:06.487306 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc) 2026-03-11 00:41:06.487326 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd) 2026-03-11 00:41:06.487345 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd) 2026-03-11 00:41:06.487362 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd) 2026-03-11 00:41:06.487380 | orchestrator | 2026-03-11 00:41:06.487398 | orchestrator | TASK [Reload udev rules] ******************************************************* 2026-03-11 00:41:06.487415 | orchestrator | Wednesday 11 March 2026 00:41:05 +0000 (0:00:02.118) 0:00:06.691 ******* 2026-03-11 00:41:06.487432 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:41:06.487450 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:41:06.487466 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:41:06.487504 | orchestrator | 2026-03-11 00:41:06.487535 | orchestrator | TASK [Request device events from the 
kernel] *********************************** 2026-03-11 00:41:06.487555 | orchestrator | Wednesday 11 March 2026 00:41:05 +0000 (0:00:00.613) 0:00:07.305 ******* 2026-03-11 00:41:06.487573 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:41:06.487592 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:41:06.487611 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:41:06.487631 | orchestrator | 2026-03-11 00:41:06.487650 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:41:06.487671 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:41:06.487690 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:41:06.487736 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:41:06.487756 | orchestrator | 2026-03-11 00:41:06.487774 | orchestrator | 2026-03-11 00:41:06.487793 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:41:06.487812 | orchestrator | Wednesday 11 March 2026 00:41:06 +0000 (0:00:00.639) 0:00:07.945 ******* 2026-03-11 00:41:06.487831 | orchestrator | =============================================================================== 2026-03-11 00:41:06.487850 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.12s 2026-03-11 00:41:06.487870 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.52s 2026-03-11 00:41:06.487888 | orchestrator | Check device availability ----------------------------------------------- 1.19s 2026-03-11 00:41:06.487907 | orchestrator | Request device events from the kernel ----------------------------------- 0.64s 2026-03-11 00:41:06.487925 | orchestrator | Reload udev rules 
------------------------------------------------------- 0.61s 2026-03-11 00:41:06.487944 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.58s 2026-03-11 00:41:06.487963 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.58s 2026-03-11 00:41:06.487981 | orchestrator | Remove all rook related logical devices --------------------------------- 0.35s 2026-03-11 00:41:06.488000 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.24s 2026-03-11 00:41:18.478409 | orchestrator | 2026-03-11 00:41:18 | INFO  | Prepare task for execution of facts. 2026-03-11 00:41:18.539995 | orchestrator | 2026-03-11 00:41:18 | INFO  | Task 88c43064-bf2a-408e-b669-65fcb606f87a (facts) was prepared for execution. 2026-03-11 00:41:18.540129 | orchestrator | 2026-03-11 00:41:18 | INFO  | It takes a moment until task 88c43064-bf2a-408e-b669-65fcb606f87a (facts) has been started and output is visible here. 
2026-03-11 00:41:29.863122 | orchestrator | 2026-03-11 00:41:29.863354 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-11 00:41:29.863385 | orchestrator | 2026-03-11 00:41:29.863441 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-11 00:41:29.863462 | orchestrator | Wednesday 11 March 2026 00:41:22 +0000 (0:00:00.193) 0:00:00.193 ******* 2026-03-11 00:41:29.863480 | orchestrator | ok: [testbed-manager] 2026-03-11 00:41:29.863493 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:41:29.863504 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:41:29.863514 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:41:29.863525 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:41:29.863535 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:41:29.863545 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:41:29.863556 | orchestrator | 2026-03-11 00:41:29.863585 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-11 00:41:29.863596 | orchestrator | Wednesday 11 March 2026 00:41:23 +0000 (0:00:00.902) 0:00:01.095 ******* 2026-03-11 00:41:29.863607 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:41:29.863619 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:41:29.863630 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:41:29.863640 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:41:29.863651 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:29.863664 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:41:29.863676 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:41:29.863689 | orchestrator | 2026-03-11 00:41:29.863702 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-11 00:41:29.863714 | orchestrator | 2026-03-11 00:41:29.863727 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-11 00:41:29.863740 | orchestrator | Wednesday 11 March 2026 00:41:24 +0000 (0:00:01.058) 0:00:02.154 ******* 2026-03-11 00:41:29.863753 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:41:29.863766 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:41:29.863778 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:41:29.863791 | orchestrator | ok: [testbed-manager] 2026-03-11 00:41:29.863803 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:41:29.863816 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:41:29.863829 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:41:29.863841 | orchestrator | 2026-03-11 00:41:29.863853 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-11 00:41:29.863864 | orchestrator | 2026-03-11 00:41:29.863875 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-11 00:41:29.863886 | orchestrator | Wednesday 11 March 2026 00:41:29 +0000 (0:00:04.858) 0:00:07.013 ******* 2026-03-11 00:41:29.863897 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:41:29.863907 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:41:29.863918 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:41:29.863929 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:41:29.863939 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:29.863950 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:41:29.863960 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:41:29.863971 | orchestrator | 2026-03-11 00:41:29.863982 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:41:29.863993 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:41:29.864006 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-11 00:41:29.864017 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:41:29.864027 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:41:29.864038 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:41:29.864057 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:41:29.864068 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:41:29.864079 | orchestrator | 2026-03-11 00:41:29.864089 | orchestrator | 2026-03-11 00:41:29.864100 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:41:29.864111 | orchestrator | Wednesday 11 March 2026 00:41:29 +0000 (0:00:00.455) 0:00:07.468 ******* 2026-03-11 00:41:29.864122 | orchestrator | =============================================================================== 2026-03-11 00:41:29.864132 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.86s 2026-03-11 00:41:29.864143 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.06s 2026-03-11 00:41:29.864153 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 0.90s 2026-03-11 00:41:29.864196 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.46s 2026-03-11 00:41:31.827679 | orchestrator | 2026-03-11 00:41:31 | INFO  | Prepare task for execution of ceph-configure-lvm-volumes. 2026-03-11 00:41:31.880861 | orchestrator | 2026-03-11 00:41:31 | INFO  | Task b1e8981a-1ffe-4484-9b73-c1d3b6cd7d2c (ceph-configure-lvm-volumes) was prepared for execution. 
2026-03-11 00:41:31.880959 | orchestrator | 2026-03-11 00:41:31 | INFO  | It takes a moment until task b1e8981a-1ffe-4484-9b73-c1d3b6cd7d2c (ceph-configure-lvm-volumes) has been started and output is visible here. 2026-03-11 00:41:42.009698 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-11 00:41:42.009801 | orchestrator | 2.16.14 2026-03-11 00:41:42.009830 | orchestrator | 2026-03-11 00:41:42.009853 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2026-03-11 00:41:42.009866 | orchestrator | 2026-03-11 00:41:42.009878 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-11 00:41:42.009889 | orchestrator | Wednesday 11 March 2026 00:41:35 +0000 (0:00:00.260) 0:00:00.260 ******* 2026-03-11 00:41:42.009901 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2026-03-11 00:41:42.009912 | orchestrator | 2026-03-11 00:41:42.009923 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-11 00:41:42.009934 | orchestrator | Wednesday 11 March 2026 00:41:36 +0000 (0:00:00.200) 0:00:00.460 ******* 2026-03-11 00:41:42.009946 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:41:42.009958 | orchestrator | 2026-03-11 00:41:42.009969 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:41:42.009980 | orchestrator | Wednesday 11 March 2026 00:41:36 +0000 (0:00:00.192) 0:00:00.653 ******* 2026-03-11 00:41:42.009991 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2026-03-11 00:41:42.010002 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2026-03-11 00:41:42.010013 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2026-03-11 00:41:42.010061 | orchestrator | 
included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2026-03-11 00:41:42.010072 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2026-03-11 00:41:42.010083 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2026-03-11 00:41:42.010094 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2026-03-11 00:41:42.010104 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2026-03-11 00:41:42.010115 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2026-03-11 00:41:42.010125 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2026-03-11 00:41:42.010190 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2026-03-11 00:41:42.010203 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2026-03-11 00:41:42.010213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2026-03-11 00:41:42.010224 | orchestrator | 2026-03-11 00:41:42.010235 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:41:42.010246 | orchestrator | Wednesday 11 March 2026 00:41:36 +0000 (0:00:00.366) 0:00:01.019 ******* 2026-03-11 00:41:42.010256 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:42.010267 | orchestrator | 2026-03-11 00:41:42.010278 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:41:42.010289 | orchestrator | Wednesday 11 March 2026 00:41:36 +0000 (0:00:00.176) 0:00:01.196 ******* 2026-03-11 00:41:42.010300 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:42.010310 | orchestrator | 2026-03-11 00:41:42.010347 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:41:42.010363 | orchestrator | Wednesday 11 March 2026 00:41:36 +0000 (0:00:00.153) 0:00:01.350 ******* 2026-03-11 00:41:42.010376 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:42.010395 | orchestrator | 2026-03-11 00:41:42.010413 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:41:42.010430 | orchestrator | Wednesday 11 March 2026 00:41:37 +0000 (0:00:00.182) 0:00:01.533 ******* 2026-03-11 00:41:42.010447 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:42.010465 | orchestrator | 2026-03-11 00:41:42.010481 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:41:42.010498 | orchestrator | Wednesday 11 March 2026 00:41:37 +0000 (0:00:00.172) 0:00:01.705 ******* 2026-03-11 00:41:42.010515 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:42.010534 | orchestrator | 2026-03-11 00:41:42.010552 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:41:42.010571 | orchestrator | Wednesday 11 March 2026 00:41:37 +0000 (0:00:00.172) 0:00:01.878 ******* 2026-03-11 00:41:42.010591 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:42.010610 | orchestrator | 2026-03-11 00:41:42.010629 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:41:42.010641 | orchestrator | Wednesday 11 March 2026 00:41:37 +0000 (0:00:00.193) 0:00:02.071 ******* 2026-03-11 00:41:42.010652 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:42.010662 | orchestrator | 2026-03-11 00:41:42.010673 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:41:42.010685 | orchestrator | Wednesday 11 March 2026 00:41:37 +0000 (0:00:00.179) 0:00:02.250 ******* 
2026-03-11 00:41:42.010695 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:42.010706 | orchestrator | 2026-03-11 00:41:42.010717 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:41:42.010728 | orchestrator | Wednesday 11 March 2026 00:41:37 +0000 (0:00:00.165) 0:00:02.416 ******* 2026-03-11 00:41:42.010739 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8) 2026-03-11 00:41:42.010750 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8) 2026-03-11 00:41:42.010761 | orchestrator | 2026-03-11 00:41:42.010772 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:41:42.010808 | orchestrator | Wednesday 11 March 2026 00:41:38 +0000 (0:00:00.403) 0:00:02.819 ******* 2026-03-11 00:41:42.010827 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_093a0f58-cc4b-4485-9e6f-5c5128ebf642) 2026-03-11 00:41:42.010845 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_093a0f58-cc4b-4485-9e6f-5c5128ebf642) 2026-03-11 00:41:42.010862 | orchestrator | 2026-03-11 00:41:42.010879 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:41:42.010911 | orchestrator | Wednesday 11 March 2026 00:41:38 +0000 (0:00:00.541) 0:00:03.360 ******* 2026-03-11 00:41:42.010929 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ae1c2658-52b8-455d-907b-e7170e3050e5) 2026-03-11 00:41:42.010950 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ae1c2658-52b8-455d-907b-e7170e3050e5) 2026-03-11 00:41:42.010969 | orchestrator | 2026-03-11 00:41:42.010987 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:41:42.011006 | orchestrator | Wednesday 11 March 2026 00:41:39 
+0000 (0:00:00.535) 0:00:03.896 ******* 2026-03-11 00:41:42.011018 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8ff314bd-8772-4cae-a8e3-239e2ae43cb3) 2026-03-11 00:41:42.011028 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8ff314bd-8772-4cae-a8e3-239e2ae43cb3) 2026-03-11 00:41:42.011039 | orchestrator | 2026-03-11 00:41:42.011050 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:41:42.011061 | orchestrator | Wednesday 11 March 2026 00:41:40 +0000 (0:00:00.653) 0:00:04.549 ******* 2026-03-11 00:41:42.011071 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-11 00:41:42.011082 | orchestrator | 2026-03-11 00:41:42.011093 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:41:42.011104 | orchestrator | Wednesday 11 March 2026 00:41:40 +0000 (0:00:00.302) 0:00:04.852 ******* 2026-03-11 00:41:42.011129 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2026-03-11 00:41:42.011169 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2026-03-11 00:41:42.011181 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2026-03-11 00:41:42.011193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2026-03-11 00:41:42.011203 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2026-03-11 00:41:42.011214 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2026-03-11 00:41:42.011225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2026-03-11 00:41:42.011236 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml 
for testbed-node-3 => (item=loop7) 2026-03-11 00:41:42.011246 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2026-03-11 00:41:42.011257 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2026-03-11 00:41:42.011268 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2026-03-11 00:41:42.011282 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2026-03-11 00:41:42.011300 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2026-03-11 00:41:42.011325 | orchestrator | 2026-03-11 00:41:42.011346 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:41:42.011361 | orchestrator | Wednesday 11 March 2026 00:41:40 +0000 (0:00:00.333) 0:00:05.185 ******* 2026-03-11 00:41:42.011378 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:42.011395 | orchestrator | 2026-03-11 00:41:42.011411 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:41:42.011427 | orchestrator | Wednesday 11 March 2026 00:41:40 +0000 (0:00:00.172) 0:00:05.357 ******* 2026-03-11 00:41:42.011442 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:42.011458 | orchestrator | 2026-03-11 00:41:42.011473 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:41:42.011489 | orchestrator | Wednesday 11 March 2026 00:41:41 +0000 (0:00:00.169) 0:00:05.527 ******* 2026-03-11 00:41:42.011506 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:42.011536 | orchestrator | 2026-03-11 00:41:42.011554 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:41:42.011571 | orchestrator | Wednesday 11 March 2026 00:41:41 
+0000 (0:00:00.182) 0:00:05.710 ******* 2026-03-11 00:41:42.011586 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:42.011603 | orchestrator | 2026-03-11 00:41:42.011619 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:41:42.011636 | orchestrator | Wednesday 11 March 2026 00:41:41 +0000 (0:00:00.165) 0:00:05.875 ******* 2026-03-11 00:41:42.011653 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:42.011671 | orchestrator | 2026-03-11 00:41:42.011697 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:41:42.011715 | orchestrator | Wednesday 11 March 2026 00:41:41 +0000 (0:00:00.170) 0:00:06.045 ******* 2026-03-11 00:41:42.011732 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:42.011748 | orchestrator | 2026-03-11 00:41:42.011766 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:41:42.011784 | orchestrator | Wednesday 11 March 2026 00:41:41 +0000 (0:00:00.182) 0:00:06.228 ******* 2026-03-11 00:41:42.011803 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:42.011821 | orchestrator | 2026-03-11 00:41:42.011858 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:41:48.700595 | orchestrator | Wednesday 11 March 2026 00:41:42 +0000 (0:00:00.197) 0:00:06.425 ******* 2026-03-11 00:41:48.700711 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:41:48.700725 | orchestrator | 2026-03-11 00:41:48.700730 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:41:48.700735 | orchestrator | Wednesday 11 March 2026 00:41:42 +0000 (0:00:00.169) 0:00:06.595 ******* 2026-03-11 00:41:48.700741 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2026-03-11 00:41:48.700749 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2026-03-11 
00:41:48.700756 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-11 00:41:48.700762 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-11 00:41:48.700768 | orchestrator |
2026-03-11 00:41:48.700776 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:41:48.700782 | orchestrator | Wednesday 11 March 2026 00:41:42 +0000 (0:00:00.791) 0:00:07.387 *******
2026-03-11 00:41:48.700788 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:41:48.700794 | orchestrator |
2026-03-11 00:41:48.700801 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:41:48.700807 | orchestrator | Wednesday 11 March 2026 00:41:43 +0000 (0:00:00.179) 0:00:07.567 *******
2026-03-11 00:41:48.700813 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:41:48.700819 | orchestrator |
2026-03-11 00:41:48.700826 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:41:48.700833 | orchestrator | Wednesday 11 March 2026 00:41:43 +0000 (0:00:00.181) 0:00:07.748 *******
2026-03-11 00:41:48.700840 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:41:48.700846 | orchestrator |
2026-03-11 00:41:48.700852 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:41:48.700858 | orchestrator | Wednesday 11 March 2026 00:41:43 +0000 (0:00:00.184) 0:00:07.933 *******
2026-03-11 00:41:48.700864 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:41:48.700871 | orchestrator |
2026-03-11 00:41:48.700877 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-11 00:41:48.700884 | orchestrator | Wednesday 11 March 2026 00:41:43 +0000 (0:00:00.174) 0:00:08.107 *******
2026-03-11 00:41:48.700890 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2026-03-11 00:41:48.700897 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2026-03-11 00:41:48.700903 | orchestrator |
2026-03-11 00:41:48.700909 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-11 00:41:48.700915 | orchestrator | Wednesday 11 March 2026 00:41:43 +0000 (0:00:00.152) 0:00:08.259 *******
2026-03-11 00:41:48.700952 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:41:48.700959 | orchestrator |
2026-03-11 00:41:48.700966 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-11 00:41:48.700972 | orchestrator | Wednesday 11 March 2026 00:41:43 +0000 (0:00:00.130) 0:00:08.390 *******
2026-03-11 00:41:48.700979 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:41:48.700985 | orchestrator |
2026-03-11 00:41:48.700993 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-11 00:41:48.700999 | orchestrator | Wednesday 11 March 2026 00:41:44 +0000 (0:00:00.102) 0:00:08.492 *******
2026-03-11 00:41:48.701005 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:41:48.701011 | orchestrator |
2026-03-11 00:41:48.701017 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-11 00:41:48.701023 | orchestrator | Wednesday 11 March 2026 00:41:44 +0000 (0:00:00.115) 0:00:08.608 *******
2026-03-11 00:41:48.701029 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:41:48.701035 | orchestrator |
2026-03-11 00:41:48.701042 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-11 00:41:48.701048 | orchestrator | Wednesday 11 March 2026 00:41:44 +0000 (0:00:00.112) 0:00:08.721 *******
2026-03-11 00:41:48.701056 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '71564836-6f16-509c-9c2d-06150302b625'}})
2026-03-11 00:41:48.701063 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '20faa7ec-42ec-56bc-96e8-0b7388032f08'}})
2026-03-11 00:41:48.701069 | orchestrator |
2026-03-11 00:41:48.701075 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-11 00:41:48.701081 | orchestrator | Wednesday 11 March 2026 00:41:44 +0000 (0:00:00.141) 0:00:08.863 *******
2026-03-11 00:41:48.701089 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '71564836-6f16-509c-9c2d-06150302b625'}})
2026-03-11 00:41:48.701112 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '20faa7ec-42ec-56bc-96e8-0b7388032f08'}})
2026-03-11 00:41:48.701118 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:41:48.701125 | orchestrator |
2026-03-11 00:41:48.701190 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-11 00:41:48.701195 | orchestrator | Wednesday 11 March 2026 00:41:44 +0000 (0:00:00.118) 0:00:08.981 *******
2026-03-11 00:41:48.701199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '71564836-6f16-509c-9c2d-06150302b625'}})
2026-03-11 00:41:48.701204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '20faa7ec-42ec-56bc-96e8-0b7388032f08'}})
2026-03-11 00:41:48.701208 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:41:48.701213 | orchestrator |
2026-03-11 00:41:48.701217 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-11 00:41:48.701221 | orchestrator | Wednesday 11 March 2026 00:41:44 +0000 (0:00:00.249) 0:00:09.230 *******
2026-03-11 00:41:48.701226 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '71564836-6f16-509c-9c2d-06150302b625'}})
2026-03-11 00:41:48.701243 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '20faa7ec-42ec-56bc-96e8-0b7388032f08'}})
2026-03-11 00:41:48.701247 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:41:48.701252 | orchestrator |
2026-03-11 00:41:48.701256 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-11 00:41:48.701260 | orchestrator | Wednesday 11 March 2026 00:41:44 +0000 (0:00:00.128) 0:00:09.358 *******
2026-03-11 00:41:48.701265 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:41:48.701269 | orchestrator |
2026-03-11 00:41:48.701273 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-11 00:41:48.701277 | orchestrator | Wednesday 11 March 2026 00:41:45 +0000 (0:00:00.117) 0:00:09.475 *******
2026-03-11 00:41:48.701282 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:41:48.701292 | orchestrator |
2026-03-11 00:41:48.701296 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-11 00:41:48.701300 | orchestrator | Wednesday 11 March 2026 00:41:45 +0000 (0:00:00.127) 0:00:09.602 *******
2026-03-11 00:41:48.701305 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:41:48.701309 | orchestrator |
2026-03-11 00:41:48.701322 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-11 00:41:48.701326 | orchestrator | Wednesday 11 March 2026 00:41:45 +0000 (0:00:00.110) 0:00:09.713 *******
2026-03-11 00:41:48.701331 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:41:48.701335 | orchestrator |
2026-03-11 00:41:48.701339 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-11 00:41:48.701344 | orchestrator | Wednesday 11 March 2026 00:41:45 +0000 (0:00:00.114) 0:00:09.828 *******
2026-03-11 00:41:48.701348 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:41:48.701352 | orchestrator |
2026-03-11 00:41:48.701357 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-11 00:41:48.701361 | orchestrator | Wednesday 11 March 2026 00:41:45 +0000 (0:00:00.118) 0:00:09.947 *******
2026-03-11 00:41:48.701365 | orchestrator | ok: [testbed-node-3] => {
2026-03-11 00:41:48.701369 | orchestrator |     "ceph_osd_devices": {
2026-03-11 00:41:48.701374 | orchestrator |         "sdb": {
2026-03-11 00:41:48.701378 | orchestrator |             "osd_lvm_uuid": "71564836-6f16-509c-9c2d-06150302b625"
2026-03-11 00:41:48.701383 | orchestrator |         },
2026-03-11 00:41:48.701387 | orchestrator |         "sdc": {
2026-03-11 00:41:48.701391 | orchestrator |             "osd_lvm_uuid": "20faa7ec-42ec-56bc-96e8-0b7388032f08"
2026-03-11 00:41:48.701395 | orchestrator |         }
2026-03-11 00:41:48.701400 | orchestrator |     }
2026-03-11 00:41:48.701406 | orchestrator | }
2026-03-11 00:41:48.701413 | orchestrator |
2026-03-11 00:41:48.701419 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-11 00:41:48.701425 | orchestrator | Wednesday 11 March 2026 00:41:45 +0000 (0:00:00.130) 0:00:10.077 *******
2026-03-11 00:41:48.701431 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:41:48.701438 | orchestrator |
2026-03-11 00:41:48.701444 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-11 00:41:48.701450 | orchestrator | Wednesday 11 March 2026 00:41:45 +0000 (0:00:00.139) 0:00:10.217 *******
2026-03-11 00:41:48.701456 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:41:48.701462 | orchestrator |
2026-03-11 00:41:48.701468 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-11 00:41:48.701475 | orchestrator | Wednesday 11 March 2026 00:41:45 +0000 (0:00:00.134) 0:00:10.351 *******
2026-03-11 00:41:48.701480 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:41:48.701486 | orchestrator |
2026-03-11 00:41:48.701491 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-11 00:41:48.701498 | orchestrator | Wednesday 11 March 2026 00:41:46 +0000 (0:00:00.132) 0:00:10.484 *******
2026-03-11 00:41:48.701504 | orchestrator | changed: [testbed-node-3] => {
2026-03-11 00:41:48.701509 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-11 00:41:48.701515 | orchestrator |         "ceph_osd_devices": {
2026-03-11 00:41:48.701520 | orchestrator |             "sdb": {
2026-03-11 00:41:48.701526 | orchestrator |                 "osd_lvm_uuid": "71564836-6f16-509c-9c2d-06150302b625"
2026-03-11 00:41:48.701531 | orchestrator |             },
2026-03-11 00:41:48.701537 | orchestrator |             "sdc": {
2026-03-11 00:41:48.701544 | orchestrator |                 "osd_lvm_uuid": "20faa7ec-42ec-56bc-96e8-0b7388032f08"
2026-03-11 00:41:48.701550 | orchestrator |             }
2026-03-11 00:41:48.701556 | orchestrator |         },
2026-03-11 00:41:48.701561 | orchestrator |         "lvm_volumes": [
2026-03-11 00:41:48.701567 | orchestrator |             {
2026-03-11 00:41:48.701573 | orchestrator |                 "data": "osd-block-71564836-6f16-509c-9c2d-06150302b625",
2026-03-11 00:41:48.701579 | orchestrator |                 "data_vg": "ceph-71564836-6f16-509c-9c2d-06150302b625"
2026-03-11 00:41:48.701591 | orchestrator |             },
2026-03-11 00:41:48.701597 | orchestrator |             {
2026-03-11 00:41:48.701603 | orchestrator |                 "data": "osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08",
2026-03-11 00:41:48.701609 | orchestrator |                 "data_vg": "ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08"
2026-03-11 00:41:48.701615 | orchestrator |             }
2026-03-11 00:41:48.701621 | orchestrator |         ]
2026-03-11 00:41:48.701627 | orchestrator |     }
2026-03-11 00:41:48.701634 | orchestrator | }
2026-03-11 00:41:48.701640 | orchestrator |
2026-03-11 00:41:48.701646 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-11 00:41:48.701652 | orchestrator | Wednesday 11 March 2026 00:41:46 +0000 (0:00:00.376) 0:00:10.861 *******
2026-03-11 00:41:48.701659 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-11 00:41:48.701665 | orchestrator |
2026-03-11 00:41:48.701671 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-11 00:41:48.701676 | orchestrator |
2026-03-11 00:41:48.701680 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-11 00:41:48.701684 | orchestrator | Wednesday 11 March 2026 00:41:48 +0000 (0:00:01.777) 0:00:12.638 *******
2026-03-11 00:41:48.701688 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-11 00:41:48.701691 | orchestrator |
2026-03-11 00:41:48.701701 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-11 00:41:48.701705 | orchestrator | Wednesday 11 March 2026 00:41:48 +0000 (0:00:00.247) 0:00:12.886 *******
2026-03-11 00:41:48.701708 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:41:48.701712 | orchestrator |
2026-03-11 00:41:48.701722 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:41:56.407880 | orchestrator | Wednesday 11 March 2026 00:41:48 +0000 (0:00:00.229) 0:00:13.115 *******
2026-03-11 00:41:56.407998 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2026-03-11 00:41:56.408014 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2026-03-11 00:41:56.408026 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2026-03-11 00:41:56.408037 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2026-03-11 00:41:56.408047 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2026-03-11 00:41:56.408058 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2026-03-11 00:41:56.408069 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2026-03-11 00:41:56.408085 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2026-03-11 00:41:56.408096 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2026-03-11 00:41:56.408108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2026-03-11 00:41:56.408162 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2026-03-11 00:41:56.408174 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2026-03-11 00:41:56.408185 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2026-03-11 00:41:56.408196 | orchestrator |
2026-03-11 00:41:56.408208 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:41:56.408220 | orchestrator | Wednesday 11 March 2026 00:41:49 +0000 (0:00:00.368) 0:00:13.484 *******
2026-03-11 00:41:56.408230 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:41:56.408242 | orchestrator |
2026-03-11 00:41:56.408253 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:41:56.408264 | orchestrator | Wednesday 11 March 2026 00:41:49 +0000 (0:00:00.181) 0:00:13.665 *******
2026-03-11 00:41:56.408304 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:41:56.408315 | orchestrator |
2026-03-11 00:41:56.408326 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:41:56.408337 | orchestrator | Wednesday 11 March 2026 00:41:49 +0000 (0:00:00.210) 0:00:13.876 *******
2026-03-11 00:41:56.408348 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:41:56.408358 | orchestrator |
2026-03-11 00:41:56.408369 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:41:56.408380 | orchestrator | Wednesday 11 March 2026 00:41:49 +0000 (0:00:00.206) 0:00:14.082 *******
2026-03-11 00:41:56.408390 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:41:56.408403 | orchestrator |
2026-03-11 00:41:56.408415 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:41:56.408428 | orchestrator | Wednesday 11 March 2026 00:41:49 +0000 (0:00:00.190) 0:00:14.273 *******
2026-03-11 00:41:56.408440 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:41:56.408452 | orchestrator |
2026-03-11 00:41:56.408464 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:41:56.408476 | orchestrator | Wednesday 11 March 2026 00:41:50 +0000 (0:00:00.656) 0:00:14.929 *******
2026-03-11 00:41:56.408488 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:41:56.408500 | orchestrator |
2026-03-11 00:41:56.408513 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:41:56.408525 | orchestrator | Wednesday 11 March 2026 00:41:50 +0000 (0:00:00.204) 0:00:15.134 *******
2026-03-11 00:41:56.408537 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:41:56.408549 | orchestrator |
2026-03-11 00:41:56.408562 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:41:56.408574 | orchestrator | Wednesday 11 March 2026 00:41:50 +0000 (0:00:00.211) 0:00:15.345 *******
2026-03-11 00:41:56.408586 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:41:56.408599 | orchestrator |
2026-03-11 00:41:56.408611 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:41:56.408623 | orchestrator | Wednesday 11 March 2026 00:41:51 +0000 (0:00:00.199) 0:00:15.545 *******
2026-03-11 00:41:56.408635 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a)
2026-03-11 00:41:56.408650 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a)
2026-03-11 00:41:56.408662 | orchestrator |
2026-03-11 00:41:56.408693 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:41:56.408706 | orchestrator | Wednesday 11 March 2026 00:41:51 +0000 (0:00:00.405) 0:00:15.950 *******
2026-03-11 00:41:56.408719 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_eb5be362-3b33-4846-8138-86194f5d1a8a)
2026-03-11 00:41:56.408733 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_eb5be362-3b33-4846-8138-86194f5d1a8a)
2026-03-11 00:41:56.408751 | orchestrator |
2026-03-11 00:41:56.408775 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:41:56.408798 | orchestrator | Wednesday 11 March 2026 00:41:51 +0000 (0:00:00.423) 0:00:16.374 *******
2026-03-11 00:41:56.408815 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f36f8e1d-14c5-427c-b242-d446b19c77db)
2026-03-11 00:41:56.408832 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f36f8e1d-14c5-427c-b242-d446b19c77db)
2026-03-11 00:41:56.408849 | orchestrator |
2026-03-11 00:41:56.408867 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:41:56.408907 | orchestrator | Wednesday 11 March 2026 00:41:52 +0000 (0:00:00.410) 0:00:16.768 *******
2026-03-11 00:41:56.408926 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_288642ce-5fa9-4bc7-a508-61d675ea6136)
2026-03-11 00:41:56.408945 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_288642ce-5fa9-4bc7-a508-61d675ea6136)
2026-03-11 00:41:56.408963 | orchestrator |
2026-03-11 00:41:56.408995 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:41:56.409006 | orchestrator | Wednesday 11 March 2026 00:41:52 +0000 (0:00:00.410) 0:00:17.178 *******
2026-03-11 00:41:56.409017 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-11 00:41:56.409027 | orchestrator |
2026-03-11 00:41:56.409038 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:41:56.409048 | orchestrator | Wednesday 11 March 2026 00:41:53 +0000 (0:00:00.336) 0:00:17.514 *******
2026-03-11 00:41:56.409059 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2026-03-11 00:41:56.409070 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2026-03-11 00:41:56.409080 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2026-03-11 00:41:56.409091 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2026-03-11 00:41:56.409101 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2026-03-11 00:41:56.409112 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2026-03-11 00:41:56.409190 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2026-03-11 00:41:56.409201 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2026-03-11 00:41:56.409212 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2026-03-11 00:41:56.409222 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2026-03-11 00:41:56.409233 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2026-03-11 00:41:56.409243 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2026-03-11 00:41:56.409254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2026-03-11 00:41:56.409264 | orchestrator |
2026-03-11 00:41:56.409275 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:41:56.409286 | orchestrator | Wednesday 11 March 2026 00:41:53 +0000 (0:00:00.361) 0:00:17.876 *******
2026-03-11 00:41:56.409296 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:41:56.409307 | orchestrator |
2026-03-11 00:41:56.409317 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:41:56.409328 | orchestrator | Wednesday 11 March 2026 00:41:54 +0000 (0:00:00.607) 0:00:18.483 *******
2026-03-11 00:41:56.409338 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:41:56.409349 | orchestrator |
2026-03-11 00:41:56.409360 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:41:56.409371 | orchestrator | Wednesday 11 March 2026 00:41:54 +0000 (0:00:00.203) 0:00:18.687 *******
2026-03-11 00:41:56.409381 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:41:56.409392 | orchestrator |
2026-03-11 00:41:56.409403 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:41:56.409413 | orchestrator | Wednesday 11 March 2026 00:41:54 +0000 (0:00:00.208) 0:00:18.895 *******
2026-03-11 00:41:56.409424 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:41:56.409434 | orchestrator |
2026-03-11 00:41:56.409445 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:41:56.409464 | orchestrator | Wednesday 11 March 2026 00:41:54 +0000 (0:00:00.217) 0:00:19.113 *******
2026-03-11 00:41:56.409483 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:41:56.409502 | orchestrator |
2026-03-11 00:41:56.409519 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:41:56.409537 | orchestrator | Wednesday 11 March 2026 00:41:54 +0000 (0:00:00.192) 0:00:19.305 *******
2026-03-11 00:41:56.409555 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:41:56.409582 | orchestrator |
2026-03-11 00:41:56.409612 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:41:56.409631 | orchestrator | Wednesday 11 March 2026 00:41:55 +0000 (0:00:00.181) 0:00:19.487 *******
2026-03-11 00:41:56.409651 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:41:56.409671 | orchestrator |
2026-03-11 00:41:56.409691 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:41:56.409710 | orchestrator | Wednesday 11 March 2026 00:41:55 +0000 (0:00:00.201) 0:00:19.688 *******
2026-03-11 00:41:56.409722 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:41:56.409732 | orchestrator |
2026-03-11 00:41:56.409743 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:41:56.409753 | orchestrator | Wednesday 11 March 2026 00:41:55 +0000 (0:00:00.193) 0:00:19.882 *******
2026-03-11 00:41:56.409764 | orchestrator | ok: [testbed-node-4] => (item=sda1)
2026-03-11 00:41:56.409775 | orchestrator | ok: [testbed-node-4] => (item=sda14)
2026-03-11 00:41:56.409786 | orchestrator | ok: [testbed-node-4] => (item=sda15)
2026-03-11 00:41:56.409797 | orchestrator | ok: [testbed-node-4] => (item=sda16)
2026-03-11 00:41:56.409807 | orchestrator |
2026-03-11 00:41:56.409818 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:41:56.409829 | orchestrator | Wednesday 11 March 2026 00:41:56 +0000 (0:00:00.823) 0:00:20.706 *******
2026-03-11 00:41:56.409839 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:02.848446 | orchestrator |
2026-03-11 00:42:02.848519 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:42:02.848528 | orchestrator | Wednesday 11 March 2026 00:41:56 +0000 (0:00:00.207) 0:00:20.913 *******
2026-03-11 00:42:02.848534 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:02.848540 | orchestrator |
2026-03-11 00:42:02.848545 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:42:02.848550 | orchestrator | Wednesday 11 March 2026 00:41:56 +0000 (0:00:00.215) 0:00:21.129 *******
2026-03-11 00:42:02.848554 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:02.848559 | orchestrator |
2026-03-11 00:42:02.848564 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:42:02.848569 | orchestrator | Wednesday 11 March 2026 00:41:56 +0000 (0:00:00.198) 0:00:21.328 *******
2026-03-11 00:42:02.848573 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:02.848578 | orchestrator |
2026-03-11 00:42:02.848582 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2026-03-11 00:42:02.848587 | orchestrator | Wednesday 11 March 2026 00:41:57 +0000 (0:00:00.669) 0:00:21.997 *******
2026-03-11 00:42:02.848592 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None})
2026-03-11 00:42:02.848596 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None})
2026-03-11 00:42:02.848601 | orchestrator |
2026-03-11 00:42:02.848606 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2026-03-11 00:42:02.848610 | orchestrator | Wednesday 11 March 2026 00:41:57 +0000 (0:00:00.162) 0:00:22.160 *******
2026-03-11 00:42:02.848615 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:02.848619 | orchestrator |
2026-03-11 00:42:02.848624 | orchestrator | TASK [Generate DB VG names] ****************************************************
2026-03-11 00:42:02.848628 | orchestrator | Wednesday 11 March 2026 00:41:57 +0000 (0:00:00.146) 0:00:22.307 *******
2026-03-11 00:42:02.848633 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:02.848637 | orchestrator |
2026-03-11 00:42:02.848642 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2026-03-11 00:42:02.848647 | orchestrator | Wednesday 11 March 2026 00:41:58 +0000 (0:00:00.135) 0:00:22.442 *******
2026-03-11 00:42:02.848651 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:02.848656 | orchestrator |
2026-03-11 00:42:02.848660 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2026-03-11 00:42:02.848665 | orchestrator | Wednesday 11 March 2026 00:41:58 +0000 (0:00:00.153) 0:00:22.596 *******
2026-03-11 00:42:02.848687 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:42:02.848696 | orchestrator |
2026-03-11 00:42:02.848701 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2026-03-11 00:42:02.848705 | orchestrator | Wednesday 11 March 2026 00:41:58 +0000 (0:00:00.183) 0:00:22.780 *******
2026-03-11 00:42:02.848711 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2fb06152-6c58-5f9b-bb14-a51d715c3982'}})
2026-03-11 00:42:02.848716 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2e0b0e2c-c482-530c-847f-054ffec93e8e'}})
2026-03-11 00:42:02.848720 | orchestrator |
2026-03-11 00:42:02.848725 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2026-03-11 00:42:02.848729 | orchestrator | Wednesday 11 March 2026 00:41:58 +0000 (0:00:00.180) 0:00:22.961 *******
2026-03-11 00:42:02.848735 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2fb06152-6c58-5f9b-bb14-a51d715c3982'}})
2026-03-11 00:42:02.848741 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2e0b0e2c-c482-530c-847f-054ffec93e8e'}})
2026-03-11 00:42:02.848746 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:02.848750 | orchestrator |
2026-03-11 00:42:02.848755 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2026-03-11 00:42:02.848762 | orchestrator | Wednesday 11 March 2026 00:41:58 +0000 (0:00:00.146) 0:00:23.107 *******
2026-03-11 00:42:02.848770 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2fb06152-6c58-5f9b-bb14-a51d715c3982'}})
2026-03-11 00:42:02.848777 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2e0b0e2c-c482-530c-847f-054ffec93e8e'}})
2026-03-11 00:42:02.848785 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:02.848792 | orchestrator |
2026-03-11 00:42:02.848799 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2026-03-11 00:42:02.848806 | orchestrator | Wednesday 11 March 2026 00:41:58 +0000 (0:00:00.155) 0:00:23.262 *******
2026-03-11 00:42:02.848814 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2fb06152-6c58-5f9b-bb14-a51d715c3982'}})
2026-03-11 00:42:02.848821 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2e0b0e2c-c482-530c-847f-054ffec93e8e'}})
2026-03-11 00:42:02.848829 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:02.848836 | orchestrator |
2026-03-11 00:42:02.848858 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2026-03-11 00:42:02.848864 | orchestrator | Wednesday 11 March 2026 00:41:58 +0000 (0:00:00.155) 0:00:23.417 *******
2026-03-11 00:42:02.848868 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:42:02.848873 | orchestrator |
2026-03-11 00:42:02.848877 | orchestrator | TASK [Set OSD devices config data] *********************************************
2026-03-11 00:42:02.848882 | orchestrator | Wednesday 11 March 2026 00:41:59 +0000 (0:00:00.135) 0:00:23.553 *******
2026-03-11 00:42:02.848886 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:42:02.848891 | orchestrator |
2026-03-11 00:42:02.848896 | orchestrator | TASK [Set DB devices config data] **********************************************
2026-03-11 00:42:02.848900 | orchestrator | Wednesday 11 March 2026 00:41:59 +0000 (0:00:00.137) 0:00:23.690 *******
2026-03-11 00:42:02.848919 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:02.848924 | orchestrator |
2026-03-11 00:42:02.848929 | orchestrator | TASK [Set WAL devices config data] *********************************************
2026-03-11 00:42:02.848934 | orchestrator | Wednesday 11 March 2026 00:41:59 +0000 (0:00:00.312) 0:00:24.003 *******
2026-03-11 00:42:02.848938 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:02.848943 | orchestrator |
2026-03-11 00:42:02.848947 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2026-03-11 00:42:02.848952 | orchestrator | Wednesday 11 March 2026 00:41:59 +0000 (0:00:00.136) 0:00:24.139 *******
2026-03-11 00:42:02.848956 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:02.848966 | orchestrator |
2026-03-11 00:42:02.848970 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2026-03-11 00:42:02.848975 | orchestrator | Wednesday 11 March 2026 00:41:59 +0000 (0:00:00.131) 0:00:24.271 *******
2026-03-11 00:42:02.848980 | orchestrator | ok: [testbed-node-4] => {
2026-03-11 00:42:02.848984 | orchestrator |     "ceph_osd_devices": {
2026-03-11 00:42:02.848989 | orchestrator |         "sdb": {
2026-03-11 00:42:02.848994 | orchestrator |             "osd_lvm_uuid": "2fb06152-6c58-5f9b-bb14-a51d715c3982"
2026-03-11 00:42:02.848998 | orchestrator |         },
2026-03-11 00:42:02.849004 | orchestrator |         "sdc": {
2026-03-11 00:42:02.849009 | orchestrator |             "osd_lvm_uuid": "2e0b0e2c-c482-530c-847f-054ffec93e8e"
2026-03-11 00:42:02.849015 | orchestrator |         }
2026-03-11 00:42:02.849021 | orchestrator |     }
2026-03-11 00:42:02.849026 | orchestrator | }
2026-03-11 00:42:02.849031 | orchestrator |
2026-03-11 00:42:02.849037 | orchestrator | TASK [Print WAL devices] *******************************************************
2026-03-11 00:42:02.849042 | orchestrator | Wednesday 11 March 2026 00:41:59 +0000 (0:00:00.134) 0:00:24.405 *******
2026-03-11 00:42:02.849048 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:02.849053 | orchestrator |
2026-03-11 00:42:02.849058 | orchestrator | TASK [Print DB devices] ********************************************************
2026-03-11 00:42:02.849064 | orchestrator | Wednesday 11 March 2026 00:42:00 +0000 (0:00:00.133) 0:00:24.539 *******
2026-03-11 00:42:02.849069 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:02.849074 | orchestrator |
2026-03-11 00:42:02.849080 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2026-03-11 00:42:02.849085 | orchestrator | Wednesday 11 March 2026 00:42:00 +0000 (0:00:00.125) 0:00:24.664 *******
2026-03-11 00:42:02.849090 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:42:02.849095 | orchestrator |
2026-03-11 00:42:02.849101 | orchestrator | TASK [Print configuration data] ************************************************
2026-03-11 00:42:02.849142 | orchestrator | Wednesday 11 March 2026 00:42:00 +0000 (0:00:00.128) 0:00:24.793 *******
2026-03-11 00:42:02.849148 | orchestrator | changed: [testbed-node-4] => {
2026-03-11 00:42:02.849153 | orchestrator |     "_ceph_configure_lvm_config_data": {
2026-03-11 00:42:02.849159 | orchestrator |         "ceph_osd_devices": {
2026-03-11 00:42:02.849164 | orchestrator |             "sdb": {
2026-03-11 00:42:02.849169 | orchestrator |                 "osd_lvm_uuid": "2fb06152-6c58-5f9b-bb14-a51d715c3982"
2026-03-11 00:42:02.849178 | orchestrator |             },
2026-03-11 00:42:02.849184 | orchestrator |             "sdc": {
2026-03-11 00:42:02.849190 | orchestrator |                 "osd_lvm_uuid": "2e0b0e2c-c482-530c-847f-054ffec93e8e"
2026-03-11 00:42:02.849195 | orchestrator |             }
2026-03-11 00:42:02.849200 | orchestrator |         },
2026-03-11 00:42:02.849206 | orchestrator |         "lvm_volumes": [
2026-03-11 00:42:02.849211 | orchestrator |             {
2026-03-11 00:42:02.849216 | orchestrator |                 "data": "osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982",
2026-03-11 00:42:02.849222 | orchestrator |                 "data_vg": "ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982"
2026-03-11 00:42:02.849227 | orchestrator |             },
2026-03-11 00:42:02.849232 | orchestrator |             {
2026-03-11 00:42:02.849237 | orchestrator |                 "data": "osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e",
2026-03-11 00:42:02.849243 | orchestrator |                 "data_vg": "ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e"
2026-03-11 00:42:02.849248 | orchestrator |             }
2026-03-11 00:42:02.849253 | orchestrator |         ]
2026-03-11 00:42:02.849259 | orchestrator |     }
2026-03-11 00:42:02.849264 | orchestrator | }
2026-03-11 00:42:02.849269 | orchestrator |
2026-03-11 00:42:02.849274 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2026-03-11 00:42:02.849279 | orchestrator | Wednesday 11 March 2026 00:42:00 +0000 (0:00:00.211) 0:00:25.004 *******
2026-03-11 00:42:02.849285 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2026-03-11 00:42:02.849290 | orchestrator |
2026-03-11 00:42:02.849300 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2026-03-11 00:42:02.849306 | orchestrator |
2026-03-11 00:42:02.849311 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-11 00:42:02.849316 | orchestrator | Wednesday 11 March 2026 00:42:01 +0000 (0:00:01.033) 0:00:26.038 *******
2026-03-11 00:42:02.849322 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2026-03-11 00:42:02.849327 | orchestrator |
2026-03-11 00:42:02.849332 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-11 00:42:02.849337 | orchestrator | Wednesday 11 March 2026 00:42:02 +0000 (0:00:00.689) 0:00:26.727 *******
2026-03-11 00:42:02.849342 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:42:02.849348 | orchestrator |
2026-03-11 00:42:02.849353 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:42:02.849359 | orchestrator | Wednesday 11 March 2026 00:42:02 +0000 (0:00:00.240) 0:00:26.967 *******
2026-03-11 00:42:02.849364 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2026-03-11 00:42:02.849369 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2026-03-11 00:42:02.849375 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2026-03-11 00:42:02.849380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-11 00:42:02.849386 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-11 00:42:02.849395 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-11 00:42:09.915042 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-11 00:42:09.915148
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2026-03-11 00:42:09.915156 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2026-03-11 00:42:09.915161 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2026-03-11 00:42:09.915178 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2026-03-11 00:42:09.915183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2026-03-11 00:42:09.915187 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2026-03-11 00:42:09.915191 | orchestrator | 2026-03-11 00:42:09.915196 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:09.915201 | orchestrator | Wednesday 11 March 2026 00:42:02 +0000 (0:00:00.377) 0:00:27.345 ******* 2026-03-11 00:42:09.915205 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915210 | orchestrator | 2026-03-11 00:42:09.915215 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:09.915219 | orchestrator | Wednesday 11 March 2026 00:42:03 +0000 (0:00:00.201) 0:00:27.546 ******* 2026-03-11 00:42:09.915222 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915226 | orchestrator | 2026-03-11 00:42:09.915230 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:09.915234 | orchestrator | Wednesday 11 March 2026 00:42:03 +0000 (0:00:00.191) 0:00:27.737 ******* 2026-03-11 00:42:09.915237 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915241 | orchestrator | 2026-03-11 00:42:09.915245 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:09.915249 | 
orchestrator | Wednesday 11 March 2026 00:42:03 +0000 (0:00:00.189) 0:00:27.927 ******* 2026-03-11 00:42:09.915255 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915259 | orchestrator | 2026-03-11 00:42:09.915262 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:09.915266 | orchestrator | Wednesday 11 March 2026 00:42:03 +0000 (0:00:00.183) 0:00:28.110 ******* 2026-03-11 00:42:09.915284 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915288 | orchestrator | 2026-03-11 00:42:09.915292 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:09.915296 | orchestrator | Wednesday 11 March 2026 00:42:03 +0000 (0:00:00.196) 0:00:28.307 ******* 2026-03-11 00:42:09.915300 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915303 | orchestrator | 2026-03-11 00:42:09.915307 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:09.915311 | orchestrator | Wednesday 11 March 2026 00:42:04 +0000 (0:00:00.162) 0:00:28.470 ******* 2026-03-11 00:42:09.915315 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915318 | orchestrator | 2026-03-11 00:42:09.915323 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:09.915327 | orchestrator | Wednesday 11 March 2026 00:42:04 +0000 (0:00:00.161) 0:00:28.632 ******* 2026-03-11 00:42:09.915330 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915334 | orchestrator | 2026-03-11 00:42:09.915338 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:09.915342 | orchestrator | Wednesday 11 March 2026 00:42:04 +0000 (0:00:00.163) 0:00:28.795 ******* 2026-03-11 00:42:09.915346 | orchestrator | ok: [testbed-node-5] => 
(item=scsi-0QEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6) 2026-03-11 00:42:09.915350 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6) 2026-03-11 00:42:09.915354 | orchestrator | 2026-03-11 00:42:09.915358 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:09.915362 | orchestrator | Wednesday 11 March 2026 00:42:05 +0000 (0:00:00.679) 0:00:29.475 ******* 2026-03-11 00:42:09.915366 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7fe845d7-e58c-4b3d-846a-c114ba83f0c4) 2026-03-11 00:42:09.915370 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7fe845d7-e58c-4b3d-846a-c114ba83f0c4) 2026-03-11 00:42:09.915373 | orchestrator | 2026-03-11 00:42:09.915377 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:09.915381 | orchestrator | Wednesday 11 March 2026 00:42:05 +0000 (0:00:00.382) 0:00:29.857 ******* 2026-03-11 00:42:09.915385 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b058385a-4b50-41f2-be6b-aeff7a6e6499) 2026-03-11 00:42:09.915388 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b058385a-4b50-41f2-be6b-aeff7a6e6499) 2026-03-11 00:42:09.915392 | orchestrator | 2026-03-11 00:42:09.915396 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:42:09.915400 | orchestrator | Wednesday 11 March 2026 00:42:05 +0000 (0:00:00.406) 0:00:30.264 ******* 2026-03-11 00:42:09.915403 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fc665229-5891-49fd-b2c5-1ba6ac78c628) 2026-03-11 00:42:09.915407 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fc665229-5891-49fd-b2c5-1ba6ac78c628) 2026-03-11 00:42:09.915411 | orchestrator | 2026-03-11 00:42:09.915415 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2026-03-11 00:42:09.915418 | orchestrator | Wednesday 11 March 2026 00:42:06 +0000 (0:00:00.398) 0:00:30.662 ******* 2026-03-11 00:42:09.915422 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-11 00:42:09.915426 | orchestrator | 2026-03-11 00:42:09.915430 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:09.915444 | orchestrator | Wednesday 11 March 2026 00:42:06 +0000 (0:00:00.294) 0:00:30.957 ******* 2026-03-11 00:42:09.915448 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2026-03-11 00:42:09.915452 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2026-03-11 00:42:09.915456 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2026-03-11 00:42:09.915462 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2026-03-11 00:42:09.915471 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2026-03-11 00:42:09.915477 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2026-03-11 00:42:09.915482 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2026-03-11 00:42:09.915488 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2026-03-11 00:42:09.915494 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2026-03-11 00:42:09.915499 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2026-03-11 00:42:09.915505 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 
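The "Add known links" tasks above match `/dev/disk/by-id` symlinks (e.g. `scsi-0QEMU_QEMU_HARDDISK_…`) to kernel device names so that devices can be addressed by stable identifiers. A minimal sketch of that matching, assuming the link-to-device mapping has already been scanned (the actual `_add-device-links.yml` task logic is not shown in this log):

```python
def links_for_device(dev: str, by_id: dict[str, str]) -> list[str]:
    """Return the by-id link names that resolve to kernel device `dev`.

    `by_id` maps a /dev/disk/by-id entry name to the kernel device it
    points at, as would be obtained by resolving each symlink.
    """
    return sorted(name for name, target in by_id.items() if target == dev)


# Demo mapping built from identifiers visible in the task output.
by_id = {
    "scsi-0QEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6": "sda",
    "scsi-SQEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6": "sda",
    "ata-QEMU_DVD-ROM_QM00001": "sr0",
}
print(links_for_device("sda", by_id))
```

Devices with no by-id links (such as loop devices, which the log shows being skipped) simply yield an empty list.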
2026-03-11 00:42:09.915511 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2026-03-11 00:42:09.915517 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2026-03-11 00:42:09.915524 | orchestrator | 2026-03-11 00:42:09.915530 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:09.915536 | orchestrator | Wednesday 11 March 2026 00:42:06 +0000 (0:00:00.304) 0:00:31.261 ******* 2026-03-11 00:42:09.915542 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915546 | orchestrator | 2026-03-11 00:42:09.915549 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:09.915553 | orchestrator | Wednesday 11 March 2026 00:42:06 +0000 (0:00:00.152) 0:00:31.413 ******* 2026-03-11 00:42:09.915557 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915561 | orchestrator | 2026-03-11 00:42:09.915564 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:09.915568 | orchestrator | Wednesday 11 March 2026 00:42:07 +0000 (0:00:00.161) 0:00:31.575 ******* 2026-03-11 00:42:09.915572 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915575 | orchestrator | 2026-03-11 00:42:09.915579 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:09.915586 | orchestrator | Wednesday 11 March 2026 00:42:07 +0000 (0:00:00.167) 0:00:31.743 ******* 2026-03-11 00:42:09.915590 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915593 | orchestrator | 2026-03-11 00:42:09.915597 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:09.915601 | orchestrator | Wednesday 11 March 2026 00:42:07 +0000 (0:00:00.167) 0:00:31.911 ******* 2026-03-11 00:42:09.915605 
| orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915609 | orchestrator | 2026-03-11 00:42:09.915614 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:09.915618 | orchestrator | Wednesday 11 March 2026 00:42:07 +0000 (0:00:00.159) 0:00:32.070 ******* 2026-03-11 00:42:09.915623 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915627 | orchestrator | 2026-03-11 00:42:09.915631 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:09.915636 | orchestrator | Wednesday 11 March 2026 00:42:08 +0000 (0:00:00.536) 0:00:32.607 ******* 2026-03-11 00:42:09.915640 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915644 | orchestrator | 2026-03-11 00:42:09.915649 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:09.915653 | orchestrator | Wednesday 11 March 2026 00:42:08 +0000 (0:00:00.182) 0:00:32.789 ******* 2026-03-11 00:42:09.915658 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915662 | orchestrator | 2026-03-11 00:42:09.915667 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:09.915671 | orchestrator | Wednesday 11 March 2026 00:42:08 +0000 (0:00:00.191) 0:00:32.980 ******* 2026-03-11 00:42:09.915676 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2026-03-11 00:42:09.915684 | orchestrator | ok: [testbed-node-5] => (item=sda14) 2026-03-11 00:42:09.915689 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2026-03-11 00:42:09.915693 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2026-03-11 00:42:09.915697 | orchestrator | 2026-03-11 00:42:09.915702 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:09.915706 | orchestrator | Wednesday 11 March 2026 00:42:09 +0000 (0:00:00.575) 
0:00:33.556 ******* 2026-03-11 00:42:09.915710 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915715 | orchestrator | 2026-03-11 00:42:09.915719 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:09.915724 | orchestrator | Wednesday 11 March 2026 00:42:09 +0000 (0:00:00.201) 0:00:33.757 ******* 2026-03-11 00:42:09.915728 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915732 | orchestrator | 2026-03-11 00:42:09.915737 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:09.915741 | orchestrator | Wednesday 11 March 2026 00:42:09 +0000 (0:00:00.187) 0:00:33.944 ******* 2026-03-11 00:42:09.915745 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915750 | orchestrator | 2026-03-11 00:42:09.915754 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:42:09.915759 | orchestrator | Wednesday 11 March 2026 00:42:09 +0000 (0:00:00.191) 0:00:34.136 ******* 2026-03-11 00:42:09.915763 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:09.915767 | orchestrator | 2026-03-11 00:42:09.915775 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2026-03-11 00:42:14.008877 | orchestrator | Wednesday 11 March 2026 00:42:09 +0000 (0:00:00.194) 0:00:34.331 ******* 2026-03-11 00:42:14.008997 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None}) 2026-03-11 00:42:14.009019 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2026-03-11 00:42:14.009037 | orchestrator | 2026-03-11 00:42:14.009053 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2026-03-11 00:42:14.009067 | orchestrator | Wednesday 11 March 2026 00:42:10 +0000 (0:00:00.158) 0:00:34.489 ******* 2026-03-11 00:42:14.009083 | orchestrator | skipping: 
[testbed-node-5] 2026-03-11 00:42:14.009178 | orchestrator | 2026-03-11 00:42:14.009196 | orchestrator | TASK [Generate DB VG names] **************************************************** 2026-03-11 00:42:14.009212 | orchestrator | Wednesday 11 March 2026 00:42:10 +0000 (0:00:00.122) 0:00:34.612 ******* 2026-03-11 00:42:14.009228 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:14.009245 | orchestrator | 2026-03-11 00:42:14.009261 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2026-03-11 00:42:14.009277 | orchestrator | Wednesday 11 March 2026 00:42:10 +0000 (0:00:00.128) 0:00:34.741 ******* 2026-03-11 00:42:14.009293 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:14.009309 | orchestrator | 2026-03-11 00:42:14.009326 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2026-03-11 00:42:14.009343 | orchestrator | Wednesday 11 March 2026 00:42:10 +0000 (0:00:00.270) 0:00:35.011 ******* 2026-03-11 00:42:14.009359 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:42:14.009377 | orchestrator | 2026-03-11 00:42:14.009394 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2026-03-11 00:42:14.009411 | orchestrator | Wednesday 11 March 2026 00:42:10 +0000 (0:00:00.118) 0:00:35.129 ******* 2026-03-11 00:42:14.009429 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c12a1925-beca-5a04-a9cd-b492500b7146'}}) 2026-03-11 00:42:14.009446 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '75b18a9f-434b-5575-8ed7-e1e8868eceb5'}}) 2026-03-11 00:42:14.009463 | orchestrator | 2026-03-11 00:42:14.009480 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2026-03-11 00:42:14.009497 | orchestrator | Wednesday 11 March 2026 00:42:10 +0000 (0:00:00.154) 0:00:35.284 ******* 2026-03-11 00:42:14.009515 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c12a1925-beca-5a04-a9cd-b492500b7146'}})  2026-03-11 00:42:14.009564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '75b18a9f-434b-5575-8ed7-e1e8868eceb5'}})  2026-03-11 00:42:14.009582 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:14.009599 | orchestrator | 2026-03-11 00:42:14.009615 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2026-03-11 00:42:14.009632 | orchestrator | Wednesday 11 March 2026 00:42:10 +0000 (0:00:00.131) 0:00:35.415 ******* 2026-03-11 00:42:14.009648 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c12a1925-beca-5a04-a9cd-b492500b7146'}})  2026-03-11 00:42:14.009664 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '75b18a9f-434b-5575-8ed7-e1e8868eceb5'}})  2026-03-11 00:42:14.009680 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:14.009696 | orchestrator | 2026-03-11 00:42:14.009712 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2026-03-11 00:42:14.009729 | orchestrator | Wednesday 11 March 2026 00:42:11 +0000 (0:00:00.120) 0:00:35.536 ******* 2026-03-11 00:42:14.009745 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c12a1925-beca-5a04-a9cd-b492500b7146'}})  2026-03-11 00:42:14.009760 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '75b18a9f-434b-5575-8ed7-e1e8868eceb5'}})  2026-03-11 00:42:14.009776 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:14.009791 | orchestrator | 2026-03-11 00:42:14.009808 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2026-03-11 00:42:14.009824 | orchestrator | Wednesday 11 March 2026 00:42:11 +0000 
(0:00:00.126) 0:00:35.662 ******* 2026-03-11 00:42:14.009840 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:42:14.009857 | orchestrator | 2026-03-11 00:42:14.009873 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2026-03-11 00:42:14.009889 | orchestrator | Wednesday 11 March 2026 00:42:11 +0000 (0:00:00.122) 0:00:35.785 ******* 2026-03-11 00:42:14.009905 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:42:14.009922 | orchestrator | 2026-03-11 00:42:14.009938 | orchestrator | TASK [Set DB devices config data] ********************************************** 2026-03-11 00:42:14.009954 | orchestrator | Wednesday 11 March 2026 00:42:11 +0000 (0:00:00.117) 0:00:35.902 ******* 2026-03-11 00:42:14.009970 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:14.009986 | orchestrator | 2026-03-11 00:42:14.010003 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2026-03-11 00:42:14.010112 | orchestrator | Wednesday 11 March 2026 00:42:11 +0000 (0:00:00.132) 0:00:36.034 ******* 2026-03-11 00:42:14.010134 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:14.010149 | orchestrator | 2026-03-11 00:42:14.010166 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2026-03-11 00:42:14.010183 | orchestrator | Wednesday 11 March 2026 00:42:11 +0000 (0:00:00.113) 0:00:36.148 ******* 2026-03-11 00:42:14.010199 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:14.010215 | orchestrator | 2026-03-11 00:42:14.010231 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2026-03-11 00:42:14.010248 | orchestrator | Wednesday 11 March 2026 00:42:11 +0000 (0:00:00.119) 0:00:36.267 ******* 2026-03-11 00:42:14.010264 | orchestrator | ok: [testbed-node-5] => { 2026-03-11 00:42:14.010280 | orchestrator |  "ceph_osd_devices": { 2026-03-11 00:42:14.010296 | orchestrator |  "sdb": { 
2026-03-11 00:42:14.010336 | orchestrator |  "osd_lvm_uuid": "c12a1925-beca-5a04-a9cd-b492500b7146" 2026-03-11 00:42:14.010354 | orchestrator |  }, 2026-03-11 00:42:14.010370 | orchestrator |  "sdc": { 2026-03-11 00:42:14.010408 | orchestrator |  "osd_lvm_uuid": "75b18a9f-434b-5575-8ed7-e1e8868eceb5" 2026-03-11 00:42:14.010425 | orchestrator |  } 2026-03-11 00:42:14.010441 | orchestrator |  } 2026-03-11 00:42:14.010457 | orchestrator | } 2026-03-11 00:42:14.010474 | orchestrator | 2026-03-11 00:42:14.010505 | orchestrator | TASK [Print WAL devices] ******************************************************* 2026-03-11 00:42:14.010521 | orchestrator | Wednesday 11 March 2026 00:42:11 +0000 (0:00:00.139) 0:00:36.407 ******* 2026-03-11 00:42:14.010537 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:14.010554 | orchestrator | 2026-03-11 00:42:14.010570 | orchestrator | TASK [Print DB devices] ******************************************************** 2026-03-11 00:42:14.010584 | orchestrator | Wednesday 11 March 2026 00:42:12 +0000 (0:00:00.373) 0:00:36.781 ******* 2026-03-11 00:42:14.010597 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:14.010610 | orchestrator | 2026-03-11 00:42:14.010624 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2026-03-11 00:42:14.010636 | orchestrator | Wednesday 11 March 2026 00:42:12 +0000 (0:00:00.140) 0:00:36.922 ******* 2026-03-11 00:42:14.010649 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:42:14.010662 | orchestrator | 2026-03-11 00:42:14.010675 | orchestrator | TASK [Print configuration data] ************************************************ 2026-03-11 00:42:14.010689 | orchestrator | Wednesday 11 March 2026 00:42:12 +0000 (0:00:00.171) 0:00:37.093 ******* 2026-03-11 00:42:14.010702 | orchestrator | changed: [testbed-node-5] => { 2026-03-11 00:42:14.010716 | orchestrator |  "_ceph_configure_lvm_config_data": { 2026-03-11 00:42:14.010729 | orchestrator | 
 "ceph_osd_devices": { 2026-03-11 00:42:14.010743 | orchestrator |  "sdb": { 2026-03-11 00:42:14.010756 | orchestrator |  "osd_lvm_uuid": "c12a1925-beca-5a04-a9cd-b492500b7146" 2026-03-11 00:42:14.010769 | orchestrator |  }, 2026-03-11 00:42:14.010782 | orchestrator |  "sdc": { 2026-03-11 00:42:14.010802 | orchestrator |  "osd_lvm_uuid": "75b18a9f-434b-5575-8ed7-e1e8868eceb5" 2026-03-11 00:42:14.010815 | orchestrator |  } 2026-03-11 00:42:14.010827 | orchestrator |  }, 2026-03-11 00:42:14.010840 | orchestrator |  "lvm_volumes": [ 2026-03-11 00:42:14.010853 | orchestrator |  { 2026-03-11 00:42:14.010866 | orchestrator |  "data": "osd-block-c12a1925-beca-5a04-a9cd-b492500b7146", 2026-03-11 00:42:14.010880 | orchestrator |  "data_vg": "ceph-c12a1925-beca-5a04-a9cd-b492500b7146" 2026-03-11 00:42:14.010893 | orchestrator |  }, 2026-03-11 00:42:14.010911 | orchestrator |  { 2026-03-11 00:42:14.010925 | orchestrator |  "data": "osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5", 2026-03-11 00:42:14.010939 | orchestrator |  "data_vg": "ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5" 2026-03-11 00:42:14.010952 | orchestrator |  } 2026-03-11 00:42:14.010964 | orchestrator |  ] 2026-03-11 00:42:14.010978 | orchestrator |  } 2026-03-11 00:42:14.010991 | orchestrator | } 2026-03-11 00:42:14.011004 | orchestrator | 2026-03-11 00:42:14.011017 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2026-03-11 00:42:14.011029 | orchestrator | Wednesday 11 March 2026 00:42:12 +0000 (0:00:00.275) 0:00:37.368 ******* 2026-03-11 00:42:14.011041 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-11 00:42:14.011054 | orchestrator | 2026-03-11 00:42:14.011067 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:42:14.011081 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-11 00:42:14.011118 | 
orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-11 00:42:14.011131 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2026-03-11 00:42:14.011143 | orchestrator | 2026-03-11 00:42:14.011156 | orchestrator | 2026-03-11 00:42:14.011169 | orchestrator | 2026-03-11 00:42:14.011182 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:42:14.011196 | orchestrator | Wednesday 11 March 2026 00:42:13 +0000 (0:00:01.032) 0:00:38.400 ******* 2026-03-11 00:42:14.011222 | orchestrator | =============================================================================== 2026-03-11 00:42:14.011235 | orchestrator | Write configuration file ------------------------------------------------ 3.84s 2026-03-11 00:42:14.011248 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.14s 2026-03-11 00:42:14.011261 | orchestrator | Add known links to the list of available block devices ------------------ 1.11s 2026-03-11 00:42:14.011274 | orchestrator | Add known partitions to the list of available block devices ------------- 1.00s 2026-03-11 00:42:14.011288 | orchestrator | Print configuration data ------------------------------------------------ 0.86s 2026-03-11 00:42:14.011300 | orchestrator | Add known partitions to the list of available block devices ------------- 0.82s 2026-03-11 00:42:14.011313 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s 2026-03-11 00:42:14.011324 | orchestrator | Add known links to the list of available block devices ------------------ 0.68s 2026-03-11 00:42:14.011332 | orchestrator | Add known partitions to the list of available block devices ------------- 0.67s 2026-03-11 00:42:14.011340 | orchestrator | Get initial list of available block devices ----------------------------- 0.66s 2026-03-11 
00:42:14.011348 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2026-03-11 00:42:14.011356 | orchestrator | Add known links to the list of available block devices ------------------ 0.65s 2026-03-11 00:42:14.011363 | orchestrator | Print WAL devices ------------------------------------------------------- 0.65s 2026-03-11 00:42:14.011383 | orchestrator | Add known partitions to the list of available block devices ------------- 0.61s 2026-03-11 00:42:14.435941 | orchestrator | Add known partitions to the list of available block devices ------------- 0.58s 2026-03-11 00:42:14.436031 | orchestrator | Set DB devices config data ---------------------------------------------- 0.56s 2026-03-11 00:42:14.436041 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s 2026-03-11 00:42:14.436048 | orchestrator | Generate shared DB/WAL VG names ----------------------------------------- 0.54s 2026-03-11 00:42:14.436054 | orchestrator | Add known partitions to the list of available block devices ------------- 0.54s 2026-03-11 00:42:14.436061 | orchestrator | Add known links to the list of available block devices ------------------ 0.54s 2026-03-11 00:42:36.851247 | orchestrator | 2026-03-11 00:42:36 | INFO  | Task 18ae0792-3b67-4a68-87b1-7beb3e2f1534 (sync inventory) is running in background. Output coming soon. 
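The printed configuration data makes the "Compile lvm_volumes" transformation visible: each entry of `ceph_osd_devices` carries an `osd_lvm_uuid`, and the block-only `lvm_volumes` entry derives its LV name as `osd-block-<uuid>` inside VG `ceph-<uuid>`. A sketch of that derivation (the function name is hypothetical; the input/output shapes match the log's `_ceph_configure_lvm_config_data`):

```python
def compile_lvm_volumes(ceph_osd_devices: dict) -> list[dict]:
    """Build the block-only lvm_volumes list from ceph_osd_devices.

    Mirrors the naming shown in the "Print configuration data" task:
    LV "osd-block-<uuid>" in VG "ceph-<uuid>" per OSD device.
    """
    volumes = []
    for params in ceph_osd_devices.values():
        uuid = params["osd_lvm_uuid"]
        volumes.append({
            "data": f"osd-block-{uuid}",
            "data_vg": f"ceph-{uuid}",
        })
    return volumes


# Input as printed for testbed-node-4 in this log.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "2fb06152-6c58-5f9b-bb14-a51d715c3982"},
    "sdc": {"osd_lvm_uuid": "2e0b0e2c-c482-530c-847f-054ffec93e8e"},
}
print(compile_lvm_volumes(ceph_osd_devices))
```

The DB, WAL, and DB+WAL variants are skipped on these nodes, so only the block-only structure reaches the written configuration file.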
2026-03-11 00:43:02.017185 | orchestrator | 2026-03-11 00:42:38 | INFO  | Starting group_vars file reorganization
2026-03-11 00:43:02.017312 | orchestrator | 2026-03-11 00:42:38 | INFO  | Moved 0 file(s) to their respective directories
2026-03-11 00:43:02.017329 | orchestrator | 2026-03-11 00:42:38 | INFO  | Group_vars file reorganization completed
2026-03-11 00:43:02.017340 | orchestrator | 2026-03-11 00:42:41 | INFO  | Starting variable preparation from inventory
2026-03-11 00:43:02.017351 | orchestrator | 2026-03-11 00:42:44 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2026-03-11 00:43:02.017362 | orchestrator | 2026-03-11 00:42:44 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2026-03-11 00:43:02.017372 | orchestrator | 2026-03-11 00:42:44 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2026-03-11 00:43:02.017383 | orchestrator | 2026-03-11 00:42:44 | INFO  | 3 file(s) written, 6 host(s) processed
2026-03-11 00:43:02.017393 | orchestrator | 2026-03-11 00:42:44 | INFO  | Variable preparation completed
2026-03-11 00:43:02.017403 | orchestrator | 2026-03-11 00:42:45 | INFO  | Starting inventory overwrite handling
2026-03-11 00:43:02.017413 | orchestrator | 2026-03-11 00:42:45 | INFO  | Handling group overwrites in 99-overwrite
2026-03-11 00:43:02.017423 | orchestrator | 2026-03-11 00:42:45 | INFO  | Removing group frr:children from 60-generic
2026-03-11 00:43:02.017468 | orchestrator | 2026-03-11 00:42:45 | INFO  | Removing group netbird:children from 50-infrastructure
2026-03-11 00:43:02.017479 | orchestrator | 2026-03-11 00:42:45 | INFO  | Removing group ceph-mds from 50-ceph
2026-03-11 00:43:02.017489 | orchestrator | 2026-03-11 00:42:45 | INFO  | Removing group ceph-rgw from 50-ceph
2026-03-11 00:43:02.017499 | orchestrator | 2026-03-11 00:42:45 | INFO  | Handling group overwrites in 20-roles
2026-03-11 00:43:02.017509 | orchestrator | 2026-03-11 00:42:45 | INFO  | Removing group k3s_node from 50-infrastructure
2026-03-11 00:43:02.017519 | orchestrator | 2026-03-11 00:42:45 | INFO  | Removed 5 group(s) in total
2026-03-11 00:43:02.017528 | orchestrator | 2026-03-11 00:42:45 | INFO  | Inventory overwrite handling completed
2026-03-11 00:43:02.017538 | orchestrator | 2026-03-11 00:42:46 | INFO  | Starting merge of inventory files
2026-03-11 00:43:02.017548 | orchestrator | 2026-03-11 00:42:46 | INFO  | Inventory files merged successfully
2026-03-11 00:43:02.017557 | orchestrator | 2026-03-11 00:42:50 | INFO  | Generating ClusterShell configuration from Ansible inventory
2026-03-11 00:43:02.017567 | orchestrator | 2026-03-11 00:43:00 | INFO  | Successfully wrote ClusterShell configuration
2026-03-11 00:43:02.017577 | orchestrator | [master 3a712e1] 2026-03-11-00-43
2026-03-11 00:43:02.017589 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-)
2026-03-11 00:43:03.926470 | orchestrator | 2026-03-11 00:43:03 | INFO  | Prepare task for execution of ceph-create-lvm-devices.
2026-03-11 00:43:03.977960 | orchestrator | 2026-03-11 00:43:03 | INFO  | Task 0f77cca4-1bc5-4f32-a590-d150202f8aa0 (ceph-create-lvm-devices) was prepared for execution.
2026-03-11 00:43:03.978191 | orchestrator | 2026-03-11 00:43:03 | INFO  | It takes a moment until task 0f77cca4-1bc5-4f32-a590-d150202f8aa0 (ceph-create-lvm-devices) has been started and output is visible here.
2026-03-11 00:43:14.168077 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-11 00:43:14.168195 | orchestrator | 2.16.14
2026-03-11 00:43:14.168213 | orchestrator |
2026-03-11 00:43:14.168225 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2026-03-11 00:43:14.168237 | orchestrator |
2026-03-11 00:43:14.168249 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2026-03-11 00:43:14.168260 | orchestrator | Wednesday 11 March 2026 00:43:07 +0000 (0:00:00.223) 0:00:00.223 *******
2026-03-11 00:43:14.168272 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2026-03-11 00:43:14.168284 | orchestrator |
2026-03-11 00:43:14.168296 | orchestrator | TASK [Get initial list of available block devices] *****************************
2026-03-11 00:43:14.168306 | orchestrator | Wednesday 11 March 2026 00:43:08 +0000 (0:00:00.207) 0:00:00.430 *******
2026-03-11 00:43:14.168317 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:43:14.168328 | orchestrator |
2026-03-11 00:43:14.168339 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:43:14.168350 | orchestrator | Wednesday 11 March 2026 00:43:08 +0000 (0:00:00.205) 0:00:00.636 *******
2026-03-11 00:43:14.168361 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2026-03-11 00:43:14.168371 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2026-03-11 00:43:14.168382 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2026-03-11 00:43:14.168393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2026-03-11 00:43:14.168404 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2026-03-11 00:43:14.168415 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2026-03-11 00:43:14.168425 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2026-03-11 00:43:14.168461 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2026-03-11 00:43:14.168472 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2026-03-11 00:43:14.168482 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2026-03-11 00:43:14.168493 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2026-03-11 00:43:14.168504 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2026-03-11 00:43:14.168530 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2026-03-11 00:43:14.168541 | orchestrator |
2026-03-11 00:43:14.168552 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:43:14.168563 | orchestrator | Wednesday 11 March 2026 00:43:08 +0000 (0:00:00.427) 0:00:01.064 *******
2026-03-11 00:43:14.168576 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:14.168589 | orchestrator |
2026-03-11 00:43:14.168601 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:43:14.168614 | orchestrator | Wednesday 11 March 2026 00:43:08 +0000 (0:00:00.183) 0:00:01.247 *******
2026-03-11 00:43:14.168626 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:14.168639 | orchestrator |
2026-03-11 00:43:14.168652 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:43:14.168664 | orchestrator | Wednesday 11 March 2026 00:43:08 +0000 (0:00:00.166) 0:00:01.413 *******
2026-03-11 00:43:14.168676 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:14.168689 | orchestrator |
2026-03-11 00:43:14.168701 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:43:14.168714 | orchestrator | Wednesday 11 March 2026 00:43:09 +0000 (0:00:00.164) 0:00:01.577 *******
2026-03-11 00:43:14.168728 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:14.168747 | orchestrator |
2026-03-11 00:43:14.168785 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:43:14.168819 | orchestrator | Wednesday 11 March 2026 00:43:09 +0000 (0:00:00.168) 0:00:01.746 *******
2026-03-11 00:43:14.168837 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:14.168855 | orchestrator |
2026-03-11 00:43:14.168873 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:43:14.168893 | orchestrator | Wednesday 11 March 2026 00:43:09 +0000 (0:00:00.164) 0:00:01.911 *******
2026-03-11 00:43:14.168911 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:14.168928 | orchestrator |
2026-03-11 00:43:14.168945 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:43:14.168964 | orchestrator | Wednesday 11 March 2026 00:43:09 +0000 (0:00:00.204) 0:00:02.115 *******
2026-03-11 00:43:14.168983 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:14.169068 | orchestrator |
2026-03-11 00:43:14.169091 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:43:14.169110 | orchestrator | Wednesday 11 March 2026 00:43:09 +0000 (0:00:00.187) 0:00:02.303 *******
2026-03-11 00:43:14.169123 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:14.169134 | orchestrator |
2026-03-11 00:43:14.169145 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:43:14.169156 | orchestrator | Wednesday 11 March 2026 00:43:10 +0000 (0:00:00.182) 0:00:02.486 *******
2026-03-11 00:43:14.169166 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8)
2026-03-11 00:43:14.169178 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8)
2026-03-11 00:43:14.169189 | orchestrator |
2026-03-11 00:43:14.169200 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:43:14.169230 | orchestrator | Wednesday 11 March 2026 00:43:10 +0000 (0:00:00.396) 0:00:02.883 *******
2026-03-11 00:43:14.169255 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_093a0f58-cc4b-4485-9e6f-5c5128ebf642)
2026-03-11 00:43:14.169268 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_093a0f58-cc4b-4485-9e6f-5c5128ebf642)
2026-03-11 00:43:14.169288 | orchestrator |
2026-03-11 00:43:14.169305 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:43:14.169323 | orchestrator | Wednesday 11 March 2026 00:43:11 +0000 (0:00:00.572) 0:00:03.455 *******
2026-03-11 00:43:14.169341 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_ae1c2658-52b8-455d-907b-e7170e3050e5)
2026-03-11 00:43:14.169358 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_ae1c2658-52b8-455d-907b-e7170e3050e5)
2026-03-11 00:43:14.169376 | orchestrator |
2026-03-11 00:43:14.169394 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:43:14.169414 | orchestrator | Wednesday 11 March 2026 00:43:11 +0000 (0:00:00.541) 0:00:03.997 *******
2026-03-11 00:43:14.169432 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_8ff314bd-8772-4cae-a8e3-239e2ae43cb3)
2026-03-11 00:43:14.169450 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_8ff314bd-8772-4cae-a8e3-239e2ae43cb3)
2026-03-11 00:43:14.169466 | orchestrator |
2026-03-11 00:43:14.169477 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:43:14.169488 | orchestrator | Wednesday 11 March 2026 00:43:12 +0000 (0:00:00.697) 0:00:04.695 *******
2026-03-11 00:43:14.169499 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-11 00:43:14.169510 | orchestrator |
2026-03-11 00:43:14.169521 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:43:14.169531 | orchestrator | Wednesday 11 March 2026 00:43:12 +0000 (0:00:00.287) 0:00:04.982 *******
2026-03-11 00:43:14.169542 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2026-03-11 00:43:14.169553 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2026-03-11 00:43:14.169564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2026-03-11 00:43:14.169574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2026-03-11 00:43:14.169585 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2026-03-11 00:43:14.169596 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2026-03-11 00:43:14.169608 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2026-03-11 00:43:14.169619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2026-03-11 00:43:14.169629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2026-03-11 00:43:14.169640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2026-03-11 00:43:14.169651 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2026-03-11 00:43:14.169662 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2026-03-11 00:43:14.169672 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2026-03-11 00:43:14.169683 | orchestrator |
2026-03-11 00:43:14.169694 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:43:14.169705 | orchestrator | Wednesday 11 March 2026 00:43:12 +0000 (0:00:00.363) 0:00:05.345 *******
2026-03-11 00:43:14.169715 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:14.169726 | orchestrator |
2026-03-11 00:43:14.169737 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:43:14.169748 | orchestrator | Wednesday 11 March 2026 00:43:13 +0000 (0:00:00.177) 0:00:05.523 *******
2026-03-11 00:43:14.169768 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:14.169779 | orchestrator |
2026-03-11 00:43:14.169790 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:43:14.169801 | orchestrator | Wednesday 11 March 2026 00:43:13 +0000 (0:00:00.163) 0:00:05.686 *******
2026-03-11 00:43:14.169811 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:14.169822 | orchestrator |
2026-03-11 00:43:14.169833 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:43:14.169844 | orchestrator | Wednesday 11 March 2026 00:43:13 +0000 (0:00:00.184) 0:00:05.870 *******
2026-03-11 00:43:14.169854 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:14.169865 | orchestrator |
2026-03-11 00:43:14.169876 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:43:14.169887 | orchestrator | Wednesday 11 March 2026 00:43:13 +0000 (0:00:00.214) 0:00:06.085 *******
2026-03-11 00:43:14.169897 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:14.169908 | orchestrator |
2026-03-11 00:43:14.169919 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:43:14.169941 | orchestrator | Wednesday 11 March 2026 00:43:13 +0000 (0:00:00.171) 0:00:06.256 *******
2026-03-11 00:43:14.169953 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:14.169964 | orchestrator |
2026-03-11 00:43:14.169975 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:43:14.169985 | orchestrator | Wednesday 11 March 2026 00:43:13 +0000 (0:00:00.157) 0:00:06.413 *******
2026-03-11 00:43:14.170094 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:14.170119 | orchestrator |
2026-03-11 00:43:14.170146 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:43:21.475734 | orchestrator | Wednesday 11 March 2026 00:43:14 +0000 (0:00:00.167) 0:00:06.580 *******
2026-03-11 00:43:21.475864 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:21.475882 | orchestrator |
2026-03-11 00:43:21.475893 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:43:21.475904 | orchestrator | Wednesday 11 March 2026 00:43:14 +0000 (0:00:00.161) 0:00:06.741 *******
2026-03-11 00:43:21.475914 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2026-03-11 00:43:21.475924 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2026-03-11 00:43:21.475934 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2026-03-11 00:43:21.475944 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2026-03-11 00:43:21.475953 | orchestrator |
2026-03-11 00:43:21.475963 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:43:21.475972 | orchestrator | Wednesday 11 March 2026 00:43:15 +0000 (0:00:00.865) 0:00:07.607 *******
2026-03-11 00:43:21.475982 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:21.476089 | orchestrator |
2026-03-11 00:43:21.476101 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:43:21.476111 | orchestrator | Wednesday 11 March 2026 00:43:15 +0000 (0:00:00.174) 0:00:07.781 *******
2026-03-11 00:43:21.476120 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:21.476130 | orchestrator |
2026-03-11 00:43:21.476139 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:43:21.476149 | orchestrator | Wednesday 11 March 2026 00:43:15 +0000 (0:00:00.189) 0:00:07.971 *******
2026-03-11 00:43:21.476158 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:21.476167 | orchestrator |
2026-03-11 00:43:21.476177 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:43:21.476186 | orchestrator | Wednesday 11 March 2026 00:43:15 +0000 (0:00:00.167) 0:00:08.139 *******
2026-03-11 00:43:21.476196 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:21.476206 | orchestrator |
2026-03-11 00:43:21.476221 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-11 00:43:21.476238 | orchestrator | Wednesday 11 March 2026 00:43:15 +0000 (0:00:00.177) 0:00:08.316 *******
2026-03-11 00:43:21.476254 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:21.476306 | orchestrator |
2026-03-11 00:43:21.476324 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-11 00:43:21.476340 | orchestrator | Wednesday 11 March 2026 00:43:16 +0000 (0:00:00.118) 0:00:08.435 *******
2026-03-11 00:43:21.476358 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '71564836-6f16-509c-9c2d-06150302b625'}})
2026-03-11 00:43:21.476375 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '20faa7ec-42ec-56bc-96e8-0b7388032f08'}})
2026-03-11 00:43:21.476391 | orchestrator |
2026-03-11 00:43:21.476429 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-11 00:43:21.476444 | orchestrator | Wednesday 11 March 2026 00:43:16 +0000 (0:00:00.176) 0:00:08.612 *******
2026-03-11 00:43:21.476462 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})
2026-03-11 00:43:21.476481 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})
2026-03-11 00:43:21.476498 | orchestrator |
2026-03-11 00:43:21.476514 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-11 00:43:21.476531 | orchestrator | Wednesday 11 March 2026 00:43:18 +0000 (0:00:01.962) 0:00:10.574 *******
2026-03-11 00:43:21.476542 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})
2026-03-11 00:43:21.476553 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})
2026-03-11 00:43:21.476562 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:21.476572 | orchestrator |
2026-03-11 00:43:21.476582 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-11 00:43:21.476591 | orchestrator | Wednesday 11 March 2026 00:43:18 +0000 (0:00:00.137) 0:00:10.712 *******
2026-03-11 00:43:21.476601 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})
2026-03-11 00:43:21.476610 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})
2026-03-11 00:43:21.476620 | orchestrator |
2026-03-11 00:43:21.476629 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-11 00:43:21.476638 | orchestrator | Wednesday 11 March 2026 00:43:19 +0000 (0:00:01.443) 0:00:12.155 *******
2026-03-11 00:43:21.476648 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})
2026-03-11 00:43:21.476657 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})
2026-03-11 00:43:21.476667 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:21.476676 | orchestrator |
2026-03-11 00:43:21.476685 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-11 00:43:21.476695 | orchestrator | Wednesday 11 March 2026 00:43:19 +0000 (0:00:00.177) 0:00:12.333 *******
2026-03-11 00:43:21.476724 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:21.476734 | orchestrator |
2026-03-11 00:43:21.476744 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-11 00:43:21.476753 | orchestrator | Wednesday 11 March 2026 00:43:20 +0000 (0:00:00.123) 0:00:12.456 *******
2026-03-11 00:43:21.476764 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})
2026-03-11 00:43:21.476781 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})
2026-03-11 00:43:21.476818 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:21.476839 | orchestrator |
2026-03-11 00:43:21.476854 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-11 00:43:21.476870 | orchestrator | Wednesday 11 March 2026 00:43:20 +0000 (0:00:00.258) 0:00:12.715 *******
2026-03-11 00:43:21.476884 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:21.476900 | orchestrator |
2026-03-11 00:43:21.476915 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-11 00:43:21.476932 | orchestrator | Wednesday 11 March 2026 00:43:20 +0000 (0:00:00.113) 0:00:12.828 *******
2026-03-11 00:43:21.476949 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})
2026-03-11 00:43:21.476964 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})
2026-03-11 00:43:21.476980 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:21.477026 | orchestrator |
2026-03-11 00:43:21.477037 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-11 00:43:21.477046 | orchestrator | Wednesday 11 March 2026 00:43:20 +0000 (0:00:00.139) 0:00:12.968 *******
2026-03-11 00:43:21.477056 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:21.477065 | orchestrator |
2026-03-11 00:43:21.477074 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-11 00:43:21.477084 | orchestrator | Wednesday 11 March 2026 00:43:20 +0000 (0:00:00.125) 0:00:13.094 *******
2026-03-11 00:43:21.477093 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})
2026-03-11 00:43:21.477104 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})
2026-03-11 00:43:21.477113 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:21.477122 | orchestrator |
2026-03-11 00:43:21.477132 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-11 00:43:21.477141 | orchestrator | Wednesday 11 March 2026 00:43:20 +0000 (0:00:00.138) 0:00:13.232 *******
2026-03-11 00:43:21.477151 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:43:21.477161 | orchestrator |
2026-03-11 00:43:21.477170 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-11 00:43:21.477180 | orchestrator | Wednesday 11 March 2026 00:43:20 +0000 (0:00:00.125) 0:00:13.357 *******
2026-03-11 00:43:21.477189 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})
2026-03-11 00:43:21.477199 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})
2026-03-11 00:43:21.477208 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:21.477217 | orchestrator |
2026-03-11 00:43:21.477227 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-11 00:43:21.477236 | orchestrator | Wednesday 11 March 2026 00:43:21 +0000 (0:00:00.131) 0:00:13.489 *******
2026-03-11 00:43:21.477246 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})
2026-03-11 00:43:21.477255 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})
2026-03-11 00:43:21.477265 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:21.477274 | orchestrator |
2026-03-11 00:43:21.477284 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-11 00:43:21.477302 | orchestrator | Wednesday 11 March 2026 00:43:21 +0000 (0:00:00.140) 0:00:13.630 *******
2026-03-11 00:43:21.477312 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})
2026-03-11 00:43:21.477321 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})
2026-03-11 00:43:21.477331 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:21.477340 | orchestrator |
2026-03-11 00:43:21.477350 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-11 00:43:21.477359 | orchestrator | Wednesday 11 March 2026 00:43:21 +0000 (0:00:00.137) 0:00:13.767 *******
2026-03-11 00:43:21.477369 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:21.477378 | orchestrator |
2026-03-11 00:43:21.477388 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-11 00:43:21.477406 | orchestrator | Wednesday 11 March 2026 00:43:21 +0000 (0:00:00.118) 0:00:13.885 *******
2026-03-11 00:43:27.062297 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:27.062374 | orchestrator |
2026-03-11 00:43:27.062381 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-11 00:43:27.062386 | orchestrator | Wednesday 11 March 2026 00:43:21 +0000 (0:00:00.121) 0:00:14.007 *******
2026-03-11 00:43:27.062390 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:27.062394 | orchestrator |
2026-03-11 00:43:27.062399 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-11 00:43:27.062403 | orchestrator | Wednesday 11 March 2026 00:43:21 +0000 (0:00:00.127) 0:00:14.135 *******
2026-03-11 00:43:27.062407 | orchestrator | ok: [testbed-node-3] => {
2026-03-11 00:43:27.062412 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-11 00:43:27.062416 | orchestrator | }
2026-03-11 00:43:27.062420 | orchestrator |
2026-03-11 00:43:27.062439 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-11 00:43:27.062443 | orchestrator | Wednesday 11 March 2026 00:43:21 +0000 (0:00:00.249) 0:00:14.384 *******
2026-03-11 00:43:27.062447 | orchestrator | ok: [testbed-node-3] => {
2026-03-11 00:43:27.062452 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-11 00:43:27.062455 | orchestrator | }
2026-03-11 00:43:27.062459 | orchestrator |
2026-03-11 00:43:27.062463 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-11 00:43:27.062467 | orchestrator | Wednesday 11 March 2026 00:43:22 +0000 (0:00:00.135) 0:00:14.520 *******
2026-03-11 00:43:27.062470 | orchestrator | ok: [testbed-node-3] => {
2026-03-11 00:43:27.062474 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-11 00:43:27.062478 | orchestrator | }
2026-03-11 00:43:27.062482 | orchestrator |
2026-03-11 00:43:27.062519 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-11 00:43:27.062524 | orchestrator | Wednesday 11 March 2026 00:43:22 +0000 (0:00:00.132) 0:00:14.653 *******
2026-03-11 00:43:27.062528 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:43:27.062532 | orchestrator |
2026-03-11 00:43:27.062536 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-11 00:43:27.062540 | orchestrator | Wednesday 11 March 2026 00:43:22 +0000 (0:00:00.636) 0:00:15.289 *******
2026-03-11 00:43:27.062544 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:43:27.062548 | orchestrator |
2026-03-11 00:43:27.062551 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-11 00:43:27.062556 | orchestrator | Wednesday 11 March 2026 00:43:23 +0000 (0:00:00.473) 0:00:15.762 *******
2026-03-11 00:43:27.062559 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:43:27.062563 | orchestrator |
2026-03-11 00:43:27.062567 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-11 00:43:27.062571 | orchestrator | Wednesday 11 March 2026 00:43:23 +0000 (0:00:00.538) 0:00:16.300 *******
2026-03-11 00:43:27.062574 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:43:27.062578 | orchestrator |
2026-03-11 00:43:27.062618 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-11 00:43:27.062623 | orchestrator | Wednesday 11 March 2026 00:43:24 +0000 (0:00:00.134) 0:00:16.434 *******
2026-03-11 00:43:27.062627 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:27.062631 | orchestrator |
2026-03-11 00:43:27.062634 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-11 00:43:27.062638 | orchestrator | Wednesday 11 March 2026 00:43:24 +0000 (0:00:00.102) 0:00:16.536 *******
2026-03-11 00:43:27.062642 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:27.062646 | orchestrator |
2026-03-11 00:43:27.062649 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-11 00:43:27.062653 | orchestrator | Wednesday 11 March 2026 00:43:24 +0000 (0:00:00.095) 0:00:16.632 *******
2026-03-11 00:43:27.062657 | orchestrator | ok: [testbed-node-3] => {
2026-03-11 00:43:27.062661 | orchestrator |     "vgs_report": {
2026-03-11 00:43:27.062665 | orchestrator |         "vg": []
2026-03-11 00:43:27.062668 | orchestrator |     }
2026-03-11 00:43:27.062672 | orchestrator | }
2026-03-11 00:43:27.062676 | orchestrator |
2026-03-11 00:43:27.062680 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-11 00:43:27.062683 | orchestrator | Wednesday 11 March 2026 00:43:24 +0000 (0:00:00.133) 0:00:16.765 *******
2026-03-11 00:43:27.062687 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:27.062691 | orchestrator |
2026-03-11 00:43:27.062694 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-11 00:43:27.062698 | orchestrator | Wednesday 11 March 2026 00:43:24 +0000 (0:00:00.136) 0:00:16.902 *******
2026-03-11 00:43:27.062702 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:27.062766 | orchestrator |
2026-03-11 00:43:27.062772 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-11 00:43:27.062776 | orchestrator | Wednesday 11 March 2026 00:43:24 +0000 (0:00:00.120) 0:00:17.023 *******
2026-03-11 00:43:27.062780 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:27.062784 | orchestrator |
2026-03-11 00:43:27.062788 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-11 00:43:27.062791 | orchestrator | Wednesday 11 March 2026 00:43:24 +0000 (0:00:00.246) 0:00:17.269 *******
2026-03-11 00:43:27.062795 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:27.062799 | orchestrator |
2026-03-11 00:43:27.062803 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-11 00:43:27.062806 | orchestrator | Wednesday 11 March 2026 00:43:24 +0000 (0:00:00.110) 0:00:17.380 *******
2026-03-11 00:43:27.062810 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:27.062814 | orchestrator |
2026-03-11 00:43:27.062817 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-11 00:43:27.062821 | orchestrator | Wednesday 11 March 2026 00:43:25 +0000 (0:00:00.111) 0:00:17.492 *******
2026-03-11 00:43:27.062825 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:27.062828 | orchestrator |
2026-03-11 00:43:27.062832 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-11 00:43:27.062836 | orchestrator | Wednesday 11 March 2026 00:43:25 +0000 (0:00:00.117) 0:00:17.609 *******
2026-03-11 00:43:27.062839 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:27.062843 | orchestrator |
2026-03-11 00:43:27.062847 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-11 00:43:27.062852 | orchestrator | Wednesday 11 March 2026 00:43:25 +0000 (0:00:00.146) 0:00:17.756 *******
2026-03-11 00:43:27.062868 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:27.062872 | orchestrator |
2026-03-11 00:43:27.062877 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-11 00:43:27.062881 | orchestrator | Wednesday 11 March 2026 00:43:25 +0000 (0:00:00.134) 0:00:17.890 *******
2026-03-11 00:43:27.062909 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:27.062914 | orchestrator |
2026-03-11 00:43:27.062918 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-11 00:43:27.062951 | orchestrator | Wednesday 11 March 2026 00:43:25 +0000 (0:00:00.118) 0:00:18.009 *******
2026-03-11 00:43:27.062956 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:43:27.062960 | orchestrator |
2026-03-11 00:43:27.062964
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-11 00:43:27.062969 | orchestrator | Wednesday 11 March 2026 00:43:25 +0000 (0:00:00.114) 0:00:18.123 ******* 2026-03-11 00:43:27.062973 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:43:27.062978 | orchestrator | 2026-03-11 00:43:27.063022 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-11 00:43:27.063028 | orchestrator | Wednesday 11 March 2026 00:43:25 +0000 (0:00:00.111) 0:00:18.235 ******* 2026-03-11 00:43:27.063032 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:43:27.063037 | orchestrator | 2026-03-11 00:43:27.063041 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-11 00:43:27.063046 | orchestrator | Wednesday 11 March 2026 00:43:25 +0000 (0:00:00.115) 0:00:18.351 ******* 2026-03-11 00:43:27.063050 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:43:27.063054 | orchestrator | 2026-03-11 00:43:27.063058 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-11 00:43:27.063063 | orchestrator | Wednesday 11 March 2026 00:43:26 +0000 (0:00:00.114) 0:00:18.466 ******* 2026-03-11 00:43:27.063067 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:43:27.063071 | orchestrator | 2026-03-11 00:43:27.063076 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-11 00:43:27.063080 | orchestrator | Wednesday 11 March 2026 00:43:26 +0000 (0:00:00.133) 0:00:18.600 ******* 2026-03-11 00:43:27.063086 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})  2026-03-11 00:43:27.063092 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 
'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})  2026-03-11 00:43:27.063098 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:43:27.063104 | orchestrator | 2026-03-11 00:43:27.063110 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-11 00:43:27.063121 | orchestrator | Wednesday 11 March 2026 00:43:26 +0000 (0:00:00.281) 0:00:18.882 ******* 2026-03-11 00:43:27.063127 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})  2026-03-11 00:43:27.063134 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})  2026-03-11 00:43:27.063140 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:43:27.063146 | orchestrator | 2026-03-11 00:43:27.063151 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-11 00:43:27.063157 | orchestrator | Wednesday 11 March 2026 00:43:26 +0000 (0:00:00.118) 0:00:19.001 ******* 2026-03-11 00:43:27.063163 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})  2026-03-11 00:43:27.063169 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})  2026-03-11 00:43:27.063175 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:43:27.063181 | orchestrator | 2026-03-11 00:43:27.063186 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-11 00:43:27.063192 | orchestrator | Wednesday 11 March 2026 00:43:26 +0000 (0:00:00.128) 0:00:19.129 ******* 2026-03-11 00:43:27.063198 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})  2026-03-11 00:43:27.063205 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})  2026-03-11 00:43:27.063217 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:43:27.063221 | orchestrator | 2026-03-11 00:43:27.063225 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-11 00:43:27.063231 | orchestrator | Wednesday 11 March 2026 00:43:26 +0000 (0:00:00.129) 0:00:19.258 ******* 2026-03-11 00:43:27.063238 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})  2026-03-11 00:43:27.063243 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})  2026-03-11 00:43:27.063247 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:43:27.063251 | orchestrator | 2026-03-11 00:43:27.063255 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-11 00:43:27.063258 | orchestrator | Wednesday 11 March 2026 00:43:27 +0000 (0:00:00.160) 0:00:19.419 ******* 2026-03-11 00:43:27.063290 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})  2026-03-11 00:43:32.882401 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})  2026-03-11 00:43:32.882505 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:43:32.882518 | orchestrator | 2026-03-11 00:43:32.882530 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2026-03-11 00:43:32.882542 | orchestrator | Wednesday 11 March 2026 00:43:27 +0000 (0:00:00.161) 0:00:19.580 ******* 2026-03-11 00:43:32.882552 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})  2026-03-11 00:43:32.882563 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})  2026-03-11 00:43:32.882573 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:43:32.882583 | orchestrator | 2026-03-11 00:43:32.882593 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-11 00:43:32.882603 | orchestrator | Wednesday 11 March 2026 00:43:27 +0000 (0:00:00.171) 0:00:19.752 ******* 2026-03-11 00:43:32.882612 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})  2026-03-11 00:43:32.882622 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})  2026-03-11 00:43:32.882632 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:43:32.882642 | orchestrator | 2026-03-11 00:43:32.882652 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-11 00:43:32.882661 | orchestrator | Wednesday 11 March 2026 00:43:27 +0000 (0:00:00.208) 0:00:19.960 ******* 2026-03-11 00:43:32.882671 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:43:32.882682 | orchestrator | 2026-03-11 00:43:32.882692 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-11 00:43:32.882700 | orchestrator | Wednesday 11 March 2026 00:43:28 +0000 
(0:00:00.532) 0:00:20.493 ******* 2026-03-11 00:43:32.882710 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:43:32.882720 | orchestrator | 2026-03-11 00:43:32.882730 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-11 00:43:32.882740 | orchestrator | Wednesday 11 March 2026 00:43:28 +0000 (0:00:00.569) 0:00:21.062 ******* 2026-03-11 00:43:32.882750 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:43:32.882759 | orchestrator | 2026-03-11 00:43:32.882769 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-11 00:43:32.882779 | orchestrator | Wednesday 11 March 2026 00:43:28 +0000 (0:00:00.157) 0:00:21.219 ******* 2026-03-11 00:43:32.882812 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'vg_name': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'}) 2026-03-11 00:43:32.882823 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'vg_name': 'ceph-71564836-6f16-509c-9c2d-06150302b625'}) 2026-03-11 00:43:32.882832 | orchestrator | 2026-03-11 00:43:32.882842 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-11 00:43:32.882852 | orchestrator | Wednesday 11 March 2026 00:43:28 +0000 (0:00:00.171) 0:00:21.390 ******* 2026-03-11 00:43:32.882877 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})  2026-03-11 00:43:32.882887 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})  2026-03-11 00:43:32.882897 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:43:32.882906 | orchestrator | 2026-03-11 00:43:32.882915 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2026-03-11 00:43:32.882925 | orchestrator | Wednesday 11 March 2026 00:43:29 +0000 (0:00:00.455) 0:00:21.846 ******* 2026-03-11 00:43:32.882935 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})  2026-03-11 00:43:32.882946 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})  2026-03-11 00:43:32.882956 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:43:32.882967 | orchestrator | 2026-03-11 00:43:32.883040 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-11 00:43:32.883051 | orchestrator | Wednesday 11 March 2026 00:43:29 +0000 (0:00:00.153) 0:00:22.000 ******* 2026-03-11 00:43:32.883060 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})  2026-03-11 00:43:32.883069 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})  2026-03-11 00:43:32.883079 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:43:32.883087 | orchestrator | 2026-03-11 00:43:32.883096 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-11 00:43:32.883105 | orchestrator | Wednesday 11 March 2026 00:43:29 +0000 (0:00:00.178) 0:00:22.178 ******* 2026-03-11 00:43:32.883130 | orchestrator | ok: [testbed-node-3] => { 2026-03-11 00:43:32.883138 | orchestrator |  "lvm_report": { 2026-03-11 00:43:32.883148 | orchestrator |  "lv": [ 2026-03-11 00:43:32.883156 | orchestrator |  { 2026-03-11 00:43:32.883164 | orchestrator |  "lv_name": 
"osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08", 2026-03-11 00:43:32.883174 | orchestrator |  "vg_name": "ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08" 2026-03-11 00:43:32.883183 | orchestrator |  }, 2026-03-11 00:43:32.883191 | orchestrator |  { 2026-03-11 00:43:32.883200 | orchestrator |  "lv_name": "osd-block-71564836-6f16-509c-9c2d-06150302b625", 2026-03-11 00:43:32.883208 | orchestrator |  "vg_name": "ceph-71564836-6f16-509c-9c2d-06150302b625" 2026-03-11 00:43:32.883215 | orchestrator |  } 2026-03-11 00:43:32.883223 | orchestrator |  ], 2026-03-11 00:43:32.883231 | orchestrator |  "pv": [ 2026-03-11 00:43:32.883238 | orchestrator |  { 2026-03-11 00:43:32.883246 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-11 00:43:32.883254 | orchestrator |  "vg_name": "ceph-71564836-6f16-509c-9c2d-06150302b625" 2026-03-11 00:43:32.883262 | orchestrator |  }, 2026-03-11 00:43:32.883270 | orchestrator |  { 2026-03-11 00:43:32.883288 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-11 00:43:32.883296 | orchestrator |  "vg_name": "ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08" 2026-03-11 00:43:32.883304 | orchestrator |  } 2026-03-11 00:43:32.883313 | orchestrator |  ] 2026-03-11 00:43:32.883322 | orchestrator |  } 2026-03-11 00:43:32.883330 | orchestrator | } 2026-03-11 00:43:32.883338 | orchestrator | 2026-03-11 00:43:32.883348 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-11 00:43:32.883357 | orchestrator | 2026-03-11 00:43:32.883365 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-11 00:43:32.883374 | orchestrator | Wednesday 11 March 2026 00:43:30 +0000 (0:00:00.335) 0:00:22.514 ******* 2026-03-11 00:43:32.883382 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2026-03-11 00:43:32.883390 | orchestrator | 2026-03-11 00:43:32.883398 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-11 
00:43:32.883405 | orchestrator | Wednesday 11 March 2026 00:43:30 +0000 (0:00:00.319) 0:00:22.833 ******* 2026-03-11 00:43:32.883414 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:43:32.883422 | orchestrator | 2026-03-11 00:43:32.883431 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:43:32.883439 | orchestrator | Wednesday 11 March 2026 00:43:30 +0000 (0:00:00.221) 0:00:23.055 ******* 2026-03-11 00:43:32.883455 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2026-03-11 00:43:32.883465 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2026-03-11 00:43:32.883474 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2026-03-11 00:43:32.883482 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2026-03-11 00:43:32.883490 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2026-03-11 00:43:32.883497 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2026-03-11 00:43:32.883505 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2026-03-11 00:43:32.883512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2026-03-11 00:43:32.883520 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2026-03-11 00:43:32.883528 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2026-03-11 00:43:32.883537 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2026-03-11 00:43:32.883545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2026-03-11 00:43:32.883553 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2026-03-11 00:43:32.883561 | orchestrator | 2026-03-11 00:43:32.883569 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:43:32.883577 | orchestrator | Wednesday 11 March 2026 00:43:31 +0000 (0:00:00.398) 0:00:23.453 ******* 2026-03-11 00:43:32.883584 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:32.883592 | orchestrator | 2026-03-11 00:43:32.883600 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:43:32.883608 | orchestrator | Wednesday 11 March 2026 00:43:31 +0000 (0:00:00.208) 0:00:23.662 ******* 2026-03-11 00:43:32.883615 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:32.883623 | orchestrator | 2026-03-11 00:43:32.883631 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:43:32.883639 | orchestrator | Wednesday 11 March 2026 00:43:31 +0000 (0:00:00.180) 0:00:23.843 ******* 2026-03-11 00:43:32.883647 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:32.883656 | orchestrator | 2026-03-11 00:43:32.883664 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:43:32.883682 | orchestrator | Wednesday 11 March 2026 00:43:32 +0000 (0:00:00.671) 0:00:24.514 ******* 2026-03-11 00:43:32.883690 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:32.883699 | orchestrator | 2026-03-11 00:43:32.883708 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:43:32.883716 | orchestrator | Wednesday 11 March 2026 00:43:32 +0000 (0:00:00.260) 0:00:24.774 ******* 2026-03-11 00:43:32.883724 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:32.883733 | orchestrator | 2026-03-11 00:43:32.883742 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2026-03-11 00:43:32.883750 | orchestrator | Wednesday 11 March 2026 00:43:32 +0000 (0:00:00.283) 0:00:25.058 ******* 2026-03-11 00:43:32.883758 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:32.883767 | orchestrator | 2026-03-11 00:43:32.883787 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:43:44.062292 | orchestrator | Wednesday 11 March 2026 00:43:32 +0000 (0:00:00.234) 0:00:25.292 ******* 2026-03-11 00:43:44.062404 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:44.062420 | orchestrator | 2026-03-11 00:43:44.062433 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:43:44.062444 | orchestrator | Wednesday 11 March 2026 00:43:33 +0000 (0:00:00.234) 0:00:25.527 ******* 2026-03-11 00:43:44.062455 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:44.062466 | orchestrator | 2026-03-11 00:43:44.062477 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:43:44.062488 | orchestrator | Wednesday 11 March 2026 00:43:33 +0000 (0:00:00.212) 0:00:25.740 ******* 2026-03-11 00:43:44.062499 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a) 2026-03-11 00:43:44.062511 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a) 2026-03-11 00:43:44.062539 | orchestrator | 2026-03-11 00:43:44.062551 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:43:44.062562 | orchestrator | Wednesday 11 March 2026 00:43:33 +0000 (0:00:00.478) 0:00:26.218 ******* 2026-03-11 00:43:44.062573 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_eb5be362-3b33-4846-8138-86194f5d1a8a) 2026-03-11 00:43:44.062583 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_eb5be362-3b33-4846-8138-86194f5d1a8a) 2026-03-11 00:43:44.062594 | orchestrator | 2026-03-11 00:43:44.062605 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:43:44.062615 | orchestrator | Wednesday 11 March 2026 00:43:34 +0000 (0:00:00.482) 0:00:26.701 ******* 2026-03-11 00:43:44.062626 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f36f8e1d-14c5-427c-b242-d446b19c77db) 2026-03-11 00:43:44.062637 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f36f8e1d-14c5-427c-b242-d446b19c77db) 2026-03-11 00:43:44.062647 | orchestrator | 2026-03-11 00:43:44.062658 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:43:44.062669 | orchestrator | Wednesday 11 March 2026 00:43:34 +0000 (0:00:00.432) 0:00:27.133 ******* 2026-03-11 00:43:44.062708 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_288642ce-5fa9-4bc7-a508-61d675ea6136) 2026-03-11 00:43:44.062721 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_288642ce-5fa9-4bc7-a508-61d675ea6136) 2026-03-11 00:43:44.062731 | orchestrator | 2026-03-11 00:43:44.062742 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:43:44.062753 | orchestrator | Wednesday 11 March 2026 00:43:35 +0000 (0:00:00.610) 0:00:27.744 ******* 2026-03-11 00:43:44.062764 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2026-03-11 00:43:44.062774 | orchestrator | 2026-03-11 00:43:44.062785 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:43:44.062796 | orchestrator | Wednesday 11 March 2026 00:43:35 +0000 (0:00:00.545) 0:00:28.290 ******* 2026-03-11 00:43:44.062844 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2026-03-11 00:43:44.062858 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2026-03-11 00:43:44.062870 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2026-03-11 00:43:44.062883 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2026-03-11 00:43:44.062896 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2026-03-11 00:43:44.062908 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2026-03-11 00:43:44.062920 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2026-03-11 00:43:44.062932 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2026-03-11 00:43:44.062945 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2026-03-11 00:43:44.062980 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2026-03-11 00:43:44.062993 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2026-03-11 00:43:44.063006 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2026-03-11 00:43:44.063018 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2026-03-11 00:43:44.063031 | orchestrator | 2026-03-11 00:43:44.063044 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:43:44.063056 | orchestrator | Wednesday 11 March 2026 00:43:36 +0000 (0:00:00.791) 0:00:29.082 ******* 2026-03-11 00:43:44.063068 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:44.063081 | orchestrator | 2026-03-11 
00:43:44.063093 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:43:44.063106 | orchestrator | Wednesday 11 March 2026 00:43:36 +0000 (0:00:00.187) 0:00:29.270 ******* 2026-03-11 00:43:44.063119 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:44.063131 | orchestrator | 2026-03-11 00:43:44.063143 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:43:44.063166 | orchestrator | Wednesday 11 March 2026 00:43:37 +0000 (0:00:00.207) 0:00:29.477 ******* 2026-03-11 00:43:44.063177 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:44.063188 | orchestrator | 2026-03-11 00:43:44.063216 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:43:44.063227 | orchestrator | Wednesday 11 March 2026 00:43:37 +0000 (0:00:00.193) 0:00:29.670 ******* 2026-03-11 00:43:44.063250 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:44.063261 | orchestrator | 2026-03-11 00:43:44.063272 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:43:44.063283 | orchestrator | Wednesday 11 March 2026 00:43:37 +0000 (0:00:00.172) 0:00:29.843 ******* 2026-03-11 00:43:44.063293 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:44.063304 | orchestrator | 2026-03-11 00:43:44.063315 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:43:44.063326 | orchestrator | Wednesday 11 March 2026 00:43:37 +0000 (0:00:00.175) 0:00:30.019 ******* 2026-03-11 00:43:44.063337 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:44.063347 | orchestrator | 2026-03-11 00:43:44.063358 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:43:44.063369 | orchestrator | Wednesday 11 March 2026 00:43:37 +0000 (0:00:00.199) 
0:00:30.218 ******* 2026-03-11 00:43:44.063380 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:44.063390 | orchestrator | 2026-03-11 00:43:44.063401 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:43:44.063412 | orchestrator | Wednesday 11 March 2026 00:43:37 +0000 (0:00:00.188) 0:00:30.407 ******* 2026-03-11 00:43:44.063430 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:44.063442 | orchestrator | 2026-03-11 00:43:44.063452 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:43:44.063463 | orchestrator | Wednesday 11 March 2026 00:43:38 +0000 (0:00:00.206) 0:00:30.614 ******* 2026-03-11 00:43:44.063474 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2026-03-11 00:43:44.063485 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2026-03-11 00:43:44.063496 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2026-03-11 00:43:44.063507 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2026-03-11 00:43:44.063517 | orchestrator | 2026-03-11 00:43:44.063528 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:43:44.063551 | orchestrator | Wednesday 11 March 2026 00:43:39 +0000 (0:00:00.892) 0:00:31.506 ******* 2026-03-11 00:43:44.063562 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:44.063573 | orchestrator | 2026-03-11 00:43:44.063584 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:43:44.063595 | orchestrator | Wednesday 11 March 2026 00:43:39 +0000 (0:00:00.228) 0:00:31.735 ******* 2026-03-11 00:43:44.063606 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:44.063616 | orchestrator | 2026-03-11 00:43:44.063627 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:43:44.063646 | orchestrator | Wednesday 11 
March 2026 00:43:40 +0000 (0:00:00.798) 0:00:32.533 ******* 2026-03-11 00:43:44.063658 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:44.063669 | orchestrator | 2026-03-11 00:43:44.063680 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2026-03-11 00:43:44.063690 | orchestrator | Wednesday 11 March 2026 00:43:40 +0000 (0:00:00.202) 0:00:32.736 ******* 2026-03-11 00:43:44.063701 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:44.063712 | orchestrator | 2026-03-11 00:43:44.063722 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2026-03-11 00:43:44.063733 | orchestrator | Wednesday 11 March 2026 00:43:40 +0000 (0:00:00.201) 0:00:32.938 ******* 2026-03-11 00:43:44.063744 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:44.063755 | orchestrator | 2026-03-11 00:43:44.063765 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2026-03-11 00:43:44.063776 | orchestrator | Wednesday 11 March 2026 00:43:40 +0000 (0:00:00.137) 0:00:33.076 ******* 2026-03-11 00:43:44.063787 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2fb06152-6c58-5f9b-bb14-a51d715c3982'}}) 2026-03-11 00:43:44.063798 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '2e0b0e2c-c482-530c-847f-054ffec93e8e'}}) 2026-03-11 00:43:44.063809 | orchestrator | 2026-03-11 00:43:44.063819 | orchestrator | TASK [Create block VGs] ******************************************************** 2026-03-11 00:43:44.063830 | orchestrator | Wednesday 11 March 2026 00:43:40 +0000 (0:00:00.190) 0:00:33.267 ******* 2026-03-11 00:43:44.063842 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'}) 2026-03-11 00:43:44.063853 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'}) 2026-03-11 00:43:44.063864 | orchestrator | 2026-03-11 00:43:44.063874 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2026-03-11 00:43:44.063885 | orchestrator | Wednesday 11 March 2026 00:43:42 +0000 (0:00:01.785) 0:00:35.053 ******* 2026-03-11 00:43:44.063896 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 00:43:44.063908 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:43:44.063925 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:44.063936 | orchestrator | 2026-03-11 00:43:44.063947 | orchestrator | TASK [Create block LVs] ******************************************************** 2026-03-11 00:43:44.063986 | orchestrator | Wednesday 11 March 2026 00:43:42 +0000 (0:00:00.138) 0:00:35.192 ******* 2026-03-11 00:43:44.063998 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'}) 2026-03-11 00:43:44.064016 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'}) 2026-03-11 00:43:49.317053 | orchestrator | 2026-03-11 00:43:49.317168 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2026-03-11 00:43:49.317179 | orchestrator | Wednesday 11 March 2026 00:43:44 +0000 (0:00:01.358) 0:00:36.550 ******* 2026-03-11 00:43:49.317184 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 
'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 00:43:49.317191 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:43:49.317196 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317201 | orchestrator | 2026-03-11 00:43:49.317239 | orchestrator | TASK [Create DB VGs] *********************************************************** 2026-03-11 00:43:49.317244 | orchestrator | Wednesday 11 March 2026 00:43:44 +0000 (0:00:00.124) 0:00:36.674 ******* 2026-03-11 00:43:49.317248 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317253 | orchestrator | 2026-03-11 00:43:49.317258 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2026-03-11 00:43:49.317263 | orchestrator | Wednesday 11 March 2026 00:43:44 +0000 (0:00:00.132) 0:00:36.807 ******* 2026-03-11 00:43:49.317268 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 00:43:49.317273 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:43:49.317278 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317282 | orchestrator | 2026-03-11 00:43:49.317286 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2026-03-11 00:43:49.317290 | orchestrator | Wednesday 11 March 2026 00:43:44 +0000 (0:00:00.134) 0:00:36.941 ******* 2026-03-11 00:43:49.317294 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317298 | orchestrator | 2026-03-11 00:43:49.317302 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2026-03-11 00:43:49.317324 | orchestrator | 
Wednesday 11 March 2026 00:43:44 +0000 (0:00:00.131) 0:00:37.072 ******* 2026-03-11 00:43:49.317329 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 00:43:49.317333 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:43:49.317337 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317341 | orchestrator | 2026-03-11 00:43:49.317345 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2026-03-11 00:43:49.317350 | orchestrator | Wednesday 11 March 2026 00:43:44 +0000 (0:00:00.269) 0:00:37.342 ******* 2026-03-11 00:43:49.317354 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317358 | orchestrator | 2026-03-11 00:43:49.317362 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2026-03-11 00:43:49.317366 | orchestrator | Wednesday 11 March 2026 00:43:45 +0000 (0:00:00.125) 0:00:37.468 ******* 2026-03-11 00:43:49.317370 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 00:43:49.317389 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:43:49.317394 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317398 | orchestrator | 2026-03-11 00:43:49.317402 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2026-03-11 00:43:49.317407 | orchestrator | Wednesday 11 March 2026 00:43:45 +0000 (0:00:00.169) 0:00:37.638 ******* 2026-03-11 00:43:49.317411 | orchestrator | ok: [testbed-node-4] 
2026-03-11 00:43:49.317416 | orchestrator | 2026-03-11 00:43:49.317420 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2026-03-11 00:43:49.317424 | orchestrator | Wednesday 11 March 2026 00:43:45 +0000 (0:00:00.128) 0:00:37.766 ******* 2026-03-11 00:43:49.317428 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 00:43:49.317432 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:43:49.317437 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317441 | orchestrator | 2026-03-11 00:43:49.317445 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2026-03-11 00:43:49.317449 | orchestrator | Wednesday 11 March 2026 00:43:45 +0000 (0:00:00.137) 0:00:37.903 ******* 2026-03-11 00:43:49.317453 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 00:43:49.317457 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:43:49.317462 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317466 | orchestrator | 2026-03-11 00:43:49.317470 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2026-03-11 00:43:49.317487 | orchestrator | Wednesday 11 March 2026 00:43:45 +0000 (0:00:00.135) 0:00:38.039 ******* 2026-03-11 00:43:49.317491 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 
00:43:49.317495 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:43:49.317499 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317503 | orchestrator | 2026-03-11 00:43:49.317508 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2026-03-11 00:43:49.317512 | orchestrator | Wednesday 11 March 2026 00:43:45 +0000 (0:00:00.130) 0:00:38.170 ******* 2026-03-11 00:43:49.317516 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317520 | orchestrator | 2026-03-11 00:43:49.317524 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2026-03-11 00:43:49.317528 | orchestrator | Wednesday 11 March 2026 00:43:45 +0000 (0:00:00.122) 0:00:38.292 ******* 2026-03-11 00:43:49.317532 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317536 | orchestrator | 2026-03-11 00:43:49.317540 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2026-03-11 00:43:49.317545 | orchestrator | Wednesday 11 March 2026 00:43:45 +0000 (0:00:00.122) 0:00:38.414 ******* 2026-03-11 00:43:49.317549 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317553 | orchestrator | 2026-03-11 00:43:49.317557 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2026-03-11 00:43:49.317561 | orchestrator | Wednesday 11 March 2026 00:43:46 +0000 (0:00:00.123) 0:00:38.537 ******* 2026-03-11 00:43:49.317565 | orchestrator | ok: [testbed-node-4] => { 2026-03-11 00:43:49.317569 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2026-03-11 00:43:49.317577 | orchestrator | } 2026-03-11 00:43:49.317583 | orchestrator | 2026-03-11 00:43:49.317587 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2026-03-11 
00:43:49.317592 | orchestrator | Wednesday 11 March 2026 00:43:46 +0000 (0:00:00.130) 0:00:38.668 ******* 2026-03-11 00:43:49.317597 | orchestrator | ok: [testbed-node-4] => { 2026-03-11 00:43:49.317601 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2026-03-11 00:43:49.317606 | orchestrator | } 2026-03-11 00:43:49.317611 | orchestrator | 2026-03-11 00:43:49.317618 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2026-03-11 00:43:49.317623 | orchestrator | Wednesday 11 March 2026 00:43:46 +0000 (0:00:00.119) 0:00:38.788 ******* 2026-03-11 00:43:49.317628 | orchestrator | ok: [testbed-node-4] => { 2026-03-11 00:43:49.317633 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2026-03-11 00:43:49.317637 | orchestrator | } 2026-03-11 00:43:49.317642 | orchestrator | 2026-03-11 00:43:49.317647 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2026-03-11 00:43:49.317652 | orchestrator | Wednesday 11 March 2026 00:43:46 +0000 (0:00:00.255) 0:00:39.043 ******* 2026-03-11 00:43:49.317657 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:43:49.317662 | orchestrator | 2026-03-11 00:43:49.317667 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2026-03-11 00:43:49.317671 | orchestrator | Wednesday 11 March 2026 00:43:47 +0000 (0:00:00.534) 0:00:39.577 ******* 2026-03-11 00:43:49.317676 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:43:49.317681 | orchestrator | 2026-03-11 00:43:49.317686 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2026-03-11 00:43:49.317690 | orchestrator | Wednesday 11 March 2026 00:43:47 +0000 (0:00:00.519) 0:00:40.096 ******* 2026-03-11 00:43:49.317695 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:43:49.317700 | orchestrator | 2026-03-11 00:43:49.317704 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] 
************************* 2026-03-11 00:43:49.317709 | orchestrator | Wednesday 11 March 2026 00:43:48 +0000 (0:00:00.561) 0:00:40.658 ******* 2026-03-11 00:43:49.317714 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:43:49.317719 | orchestrator | 2026-03-11 00:43:49.317723 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2026-03-11 00:43:49.317728 | orchestrator | Wednesday 11 March 2026 00:43:48 +0000 (0:00:00.161) 0:00:40.820 ******* 2026-03-11 00:43:49.317732 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317737 | orchestrator | 2026-03-11 00:43:49.317741 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2026-03-11 00:43:49.317746 | orchestrator | Wednesday 11 March 2026 00:43:48 +0000 (0:00:00.111) 0:00:40.931 ******* 2026-03-11 00:43:49.317751 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317756 | orchestrator | 2026-03-11 00:43:49.317760 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2026-03-11 00:43:49.317765 | orchestrator | Wednesday 11 March 2026 00:43:48 +0000 (0:00:00.118) 0:00:41.050 ******* 2026-03-11 00:43:49.317770 | orchestrator | ok: [testbed-node-4] => { 2026-03-11 00:43:49.317775 | orchestrator |  "vgs_report": { 2026-03-11 00:43:49.317780 | orchestrator |  "vg": [] 2026-03-11 00:43:49.317784 | orchestrator |  } 2026-03-11 00:43:49.317789 | orchestrator | } 2026-03-11 00:43:49.317794 | orchestrator | 2026-03-11 00:43:49.317799 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2026-03-11 00:43:49.317803 | orchestrator | Wednesday 11 March 2026 00:43:48 +0000 (0:00:00.150) 0:00:41.200 ******* 2026-03-11 00:43:49.317808 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317813 | orchestrator | 2026-03-11 00:43:49.317818 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] 
************************ 2026-03-11 00:43:49.317823 | orchestrator | Wednesday 11 March 2026 00:43:48 +0000 (0:00:00.124) 0:00:41.325 ******* 2026-03-11 00:43:49.317827 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317832 | orchestrator | 2026-03-11 00:43:49.317837 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2026-03-11 00:43:49.317845 | orchestrator | Wednesday 11 March 2026 00:43:49 +0000 (0:00:00.137) 0:00:41.462 ******* 2026-03-11 00:43:49.317850 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317854 | orchestrator | 2026-03-11 00:43:49.317859 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2026-03-11 00:43:49.317864 | orchestrator | Wednesday 11 March 2026 00:43:49 +0000 (0:00:00.125) 0:00:41.588 ******* 2026-03-11 00:43:49.317868 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:49.317873 | orchestrator | 2026-03-11 00:43:49.317881 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2026-03-11 00:43:54.260731 | orchestrator | Wednesday 11 March 2026 00:43:49 +0000 (0:00:00.139) 0:00:41.727 ******* 2026-03-11 00:43:54.260888 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:54.260929 | orchestrator | 2026-03-11 00:43:54.261002 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2026-03-11 00:43:54.261017 | orchestrator | Wednesday 11 March 2026 00:43:49 +0000 (0:00:00.469) 0:00:42.196 ******* 2026-03-11 00:43:54.261027 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:54.261037 | orchestrator | 2026-03-11 00:43:54.261047 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2026-03-11 00:43:54.261057 | orchestrator | Wednesday 11 March 2026 00:43:49 +0000 (0:00:00.152) 0:00:42.349 ******* 2026-03-11 00:43:54.261067 | orchestrator | skipping: [testbed-node-4] 
2026-03-11 00:43:54.261077 | orchestrator | 2026-03-11 00:43:54.261086 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2026-03-11 00:43:54.261096 | orchestrator | Wednesday 11 March 2026 00:43:50 +0000 (0:00:00.139) 0:00:42.489 ******* 2026-03-11 00:43:54.261106 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:54.261115 | orchestrator | 2026-03-11 00:43:54.261125 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2026-03-11 00:43:54.261135 | orchestrator | Wednesday 11 March 2026 00:43:50 +0000 (0:00:00.153) 0:00:42.642 ******* 2026-03-11 00:43:54.261146 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:54.261163 | orchestrator | 2026-03-11 00:43:54.261179 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2026-03-11 00:43:54.261194 | orchestrator | Wednesday 11 March 2026 00:43:50 +0000 (0:00:00.147) 0:00:42.790 ******* 2026-03-11 00:43:54.261212 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:54.261227 | orchestrator | 2026-03-11 00:43:54.261245 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2026-03-11 00:43:54.261262 | orchestrator | Wednesday 11 March 2026 00:43:50 +0000 (0:00:00.135) 0:00:42.925 ******* 2026-03-11 00:43:54.261278 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:54.261293 | orchestrator | 2026-03-11 00:43:54.261304 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2026-03-11 00:43:54.261315 | orchestrator | Wednesday 11 March 2026 00:43:50 +0000 (0:00:00.132) 0:00:43.058 ******* 2026-03-11 00:43:54.261326 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:54.261337 | orchestrator | 2026-03-11 00:43:54.261348 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2026-03-11 00:43:54.261359 | orchestrator | 
Wednesday 11 March 2026 00:43:50 +0000 (0:00:00.148) 0:00:43.206 ******* 2026-03-11 00:43:54.261370 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:54.261381 | orchestrator | 2026-03-11 00:43:54.261394 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2026-03-11 00:43:54.261405 | orchestrator | Wednesday 11 March 2026 00:43:50 +0000 (0:00:00.152) 0:00:43.359 ******* 2026-03-11 00:43:54.261416 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:54.261427 | orchestrator | 2026-03-11 00:43:54.261438 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2026-03-11 00:43:54.261449 | orchestrator | Wednesday 11 March 2026 00:43:51 +0000 (0:00:00.143) 0:00:43.502 ******* 2026-03-11 00:43:54.261462 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 00:43:54.261518 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:43:54.261530 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:54.261542 | orchestrator | 2026-03-11 00:43:54.261554 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2026-03-11 00:43:54.261565 | orchestrator | Wednesday 11 March 2026 00:43:51 +0000 (0:00:00.156) 0:00:43.659 ******* 2026-03-11 00:43:54.261576 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 00:43:54.261588 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:43:54.261599 | orchestrator | skipping: 
[testbed-node-4] 2026-03-11 00:43:54.261610 | orchestrator | 2026-03-11 00:43:54.261621 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2026-03-11 00:43:54.261630 | orchestrator | Wednesday 11 March 2026 00:43:51 +0000 (0:00:00.160) 0:00:43.819 ******* 2026-03-11 00:43:54.261640 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 00:43:54.261649 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:43:54.261659 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:54.261668 | orchestrator | 2026-03-11 00:43:54.261678 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2026-03-11 00:43:54.261687 | orchestrator | Wednesday 11 March 2026 00:43:51 +0000 (0:00:00.399) 0:00:44.219 ******* 2026-03-11 00:43:54.261697 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 00:43:54.261712 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:43:54.261727 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:54.261743 | orchestrator | 2026-03-11 00:43:54.261781 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2026-03-11 00:43:54.261798 | orchestrator | Wednesday 11 March 2026 00:43:51 +0000 (0:00:00.183) 0:00:44.402 ******* 2026-03-11 00:43:54.261815 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 
'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 00:43:54.261832 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:43:54.261850 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:54.261866 | orchestrator | 2026-03-11 00:43:54.261882 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2026-03-11 00:43:54.261898 | orchestrator | Wednesday 11 March 2026 00:43:52 +0000 (0:00:00.160) 0:00:44.563 ******* 2026-03-11 00:43:54.261914 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 00:43:54.261930 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:43:54.262002 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:54.262083 | orchestrator | 2026-03-11 00:43:54.262100 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2026-03-11 00:43:54.262118 | orchestrator | Wednesday 11 March 2026 00:43:52 +0000 (0:00:00.167) 0:00:44.730 ******* 2026-03-11 00:43:54.262136 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 00:43:54.262175 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:43:54.262193 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:54.262210 | orchestrator | 2026-03-11 00:43:54.262228 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2026-03-11 
00:43:54.262245 | orchestrator | Wednesday 11 March 2026 00:43:52 +0000 (0:00:00.172) 0:00:44.903 ******* 2026-03-11 00:43:54.262264 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 00:43:54.262282 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:43:54.262299 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:54.262316 | orchestrator | 2026-03-11 00:43:54.262333 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2026-03-11 00:43:54.262348 | orchestrator | Wednesday 11 March 2026 00:43:52 +0000 (0:00:00.185) 0:00:45.089 ******* 2026-03-11 00:43:54.262366 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:43:54.262384 | orchestrator | 2026-03-11 00:43:54.262403 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2026-03-11 00:43:54.262421 | orchestrator | Wednesday 11 March 2026 00:43:53 +0000 (0:00:00.526) 0:00:45.615 ******* 2026-03-11 00:43:54.262439 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:43:54.262455 | orchestrator | 2026-03-11 00:43:54.262472 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2026-03-11 00:43:54.262489 | orchestrator | Wednesday 11 March 2026 00:43:53 +0000 (0:00:00.526) 0:00:46.141 ******* 2026-03-11 00:43:54.262507 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:43:54.262524 | orchestrator | 2026-03-11 00:43:54.262542 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2026-03-11 00:43:54.262560 | orchestrator | Wednesday 11 March 2026 00:43:53 +0000 (0:00:00.152) 0:00:46.294 ******* 2026-03-11 00:43:54.262577 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 
'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'vg_name': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'}) 2026-03-11 00:43:54.262597 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'vg_name': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'}) 2026-03-11 00:43:54.262615 | orchestrator | 2026-03-11 00:43:54.262632 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2026-03-11 00:43:54.262649 | orchestrator | Wednesday 11 March 2026 00:43:54 +0000 (0:00:00.160) 0:00:46.454 ******* 2026-03-11 00:43:54.262668 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 00:43:54.262685 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:43:54.262704 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:43:54.262722 | orchestrator | 2026-03-11 00:43:54.262739 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-11 00:43:54.262756 | orchestrator | Wednesday 11 March 2026 00:43:54 +0000 (0:00:00.147) 0:00:46.602 ******* 2026-03-11 00:43:54.262775 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 00:43:54.262805 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:44:00.315591 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:00.315718 | orchestrator | 2026-03-11 00:44:00.315730 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-11 00:44:00.315739 | 
orchestrator | Wednesday 11 March 2026 00:43:54 +0000 (0:00:00.148) 0:00:46.750 ******* 2026-03-11 00:44:00.315747 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})  2026-03-11 00:44:00.315756 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})  2026-03-11 00:44:00.315763 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:00.315829 | orchestrator | 2026-03-11 00:44:00.315839 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-11 00:44:00.315846 | orchestrator | Wednesday 11 March 2026 00:43:54 +0000 (0:00:00.150) 0:00:46.901 ******* 2026-03-11 00:44:00.315852 | orchestrator | ok: [testbed-node-4] => { 2026-03-11 00:44:00.315859 | orchestrator |  "lvm_report": { 2026-03-11 00:44:00.315867 | orchestrator |  "lv": [ 2026-03-11 00:44:00.315873 | orchestrator |  { 2026-03-11 00:44:00.315879 | orchestrator |  "lv_name": "osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e", 2026-03-11 00:44:00.315886 | orchestrator |  "vg_name": "ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e" 2026-03-11 00:44:00.315893 | orchestrator |  }, 2026-03-11 00:44:00.315899 | orchestrator |  { 2026-03-11 00:44:00.315904 | orchestrator |  "lv_name": "osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982", 2026-03-11 00:44:00.315911 | orchestrator |  "vg_name": "ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982" 2026-03-11 00:44:00.315917 | orchestrator |  } 2026-03-11 00:44:00.315923 | orchestrator |  ], 2026-03-11 00:44:00.315930 | orchestrator |  "pv": [ 2026-03-11 00:44:00.315934 | orchestrator |  { 2026-03-11 00:44:00.315972 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-11 00:44:00.315988 | orchestrator |  "vg_name": "ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982" 2026-03-11 00:44:00.315992 | orchestrator |  }, 2026-03-11 
00:44:00.315996 | orchestrator |  { 2026-03-11 00:44:00.316000 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-11 00:44:00.316003 | orchestrator |  "vg_name": "ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e" 2026-03-11 00:44:00.316007 | orchestrator |  } 2026-03-11 00:44:00.316011 | orchestrator |  ] 2026-03-11 00:44:00.316015 | orchestrator |  } 2026-03-11 00:44:00.316019 | orchestrator | } 2026-03-11 00:44:00.316023 | orchestrator | 2026-03-11 00:44:00.316027 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2026-03-11 00:44:00.316031 | orchestrator | 2026-03-11 00:44:00.316035 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2026-03-11 00:44:00.316039 | orchestrator | Wednesday 11 March 2026 00:43:54 +0000 (0:00:00.408) 0:00:47.310 ******* 2026-03-11 00:44:00.316043 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2026-03-11 00:44:00.316047 | orchestrator | 2026-03-11 00:44:00.316050 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2026-03-11 00:44:00.316054 | orchestrator | Wednesday 11 March 2026 00:43:55 +0000 (0:00:00.241) 0:00:47.551 ******* 2026-03-11 00:44:00.316058 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:44:00.316062 | orchestrator | 2026-03-11 00:44:00.316065 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2026-03-11 00:44:00.316069 | orchestrator | Wednesday 11 March 2026 00:43:55 +0000 (0:00:00.214) 0:00:47.766 ******* 2026-03-11 00:44:00.316073 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2026-03-11 00:44:00.316079 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2026-03-11 00:44:00.316086 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2026-03-11 00:44:00.316092 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2026-03-11 00:44:00.316106 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2026-03-11 00:44:00.316113 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2026-03-11 00:44:00.316119 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2026-03-11 00:44:00.316127 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2026-03-11 00:44:00.316133 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2026-03-11 00:44:00.316143 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2026-03-11 00:44:00.316151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2026-03-11 00:44:00.316157 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2026-03-11 00:44:00.316164 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2026-03-11 00:44:00.316170 | orchestrator |
2026-03-11 00:44:00.316176 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:00.316182 | orchestrator | Wednesday 11 March 2026 00:43:55 +0000 (0:00:00.381) 0:00:48.148 *******
2026-03-11 00:44:00.316188 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:00.316194 | orchestrator |
2026-03-11 00:44:00.316200 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:00.316206 | orchestrator | Wednesday 11 March 2026 00:43:55 +0000 (0:00:00.226) 0:00:48.374 *******
2026-03-11 00:44:00.316212 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:00.316218 | orchestrator |
2026-03-11 00:44:00.316224 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:00.316247 | orchestrator | Wednesday 11 March 2026 00:43:56 +0000 (0:00:00.196) 0:00:48.571 *******
2026-03-11 00:44:00.316255 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:00.316261 | orchestrator |
2026-03-11 00:44:00.316267 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:00.316273 | orchestrator | Wednesday 11 March 2026 00:43:56 +0000 (0:00:00.178) 0:00:48.749 *******
2026-03-11 00:44:00.316279 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:00.316285 | orchestrator |
2026-03-11 00:44:00.316291 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:00.316296 | orchestrator | Wednesday 11 March 2026 00:43:56 +0000 (0:00:00.176) 0:00:48.926 *******
2026-03-11 00:44:00.316302 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:00.316308 | orchestrator |
2026-03-11 00:44:00.316314 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:00.316320 | orchestrator | Wednesday 11 March 2026 00:43:57 +0000 (0:00:00.552) 0:00:49.479 *******
2026-03-11 00:44:00.316327 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:00.316333 | orchestrator |
2026-03-11 00:44:00.316339 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:00.316345 | orchestrator | Wednesday 11 March 2026 00:43:57 +0000 (0:00:00.186) 0:00:49.665 *******
2026-03-11 00:44:00.316351 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:00.316357 | orchestrator |
2026-03-11 00:44:00.316363 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:00.316369 | orchestrator | Wednesday 11 March 2026 00:43:57 +0000 (0:00:00.209) 0:00:49.875 *******
2026-03-11 00:44:00.316375 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:00.316381 | orchestrator |
2026-03-11 00:44:00.316387 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:00.316393 | orchestrator | Wednesday 11 March 2026 00:43:57 +0000 (0:00:00.238) 0:00:50.114 *******
2026-03-11 00:44:00.316402 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6)
2026-03-11 00:44:00.316411 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6)
2026-03-11 00:44:00.316423 | orchestrator |
2026-03-11 00:44:00.316431 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:00.316437 | orchestrator | Wednesday 11 March 2026 00:43:58 +0000 (0:00:00.489) 0:00:50.603 *******
2026-03-11 00:44:00.316442 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_7fe845d7-e58c-4b3d-846a-c114ba83f0c4)
2026-03-11 00:44:00.316449 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_7fe845d7-e58c-4b3d-846a-c114ba83f0c4)
2026-03-11 00:44:00.316455 | orchestrator |
2026-03-11 00:44:00.316461 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:00.316467 | orchestrator | Wednesday 11 March 2026 00:43:58 +0000 (0:00:00.532) 0:00:51.135 *******
2026-03-11 00:44:00.316473 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_b058385a-4b50-41f2-be6b-aeff7a6e6499)
2026-03-11 00:44:00.316480 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_b058385a-4b50-41f2-be6b-aeff7a6e6499)
2026-03-11 00:44:00.316484 | orchestrator |
2026-03-11 00:44:00.316488 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:00.316492 | orchestrator | Wednesday 11 March 2026 00:43:59 +0000 (0:00:00.481) 0:00:51.617 *******
2026-03-11 00:44:00.316496 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_fc665229-5891-49fd-b2c5-1ba6ac78c628)
2026-03-11 00:44:00.316500 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_fc665229-5891-49fd-b2c5-1ba6ac78c628)
2026-03-11 00:44:00.316503 | orchestrator |
2026-03-11 00:44:00.316507 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2026-03-11 00:44:00.316511 | orchestrator | Wednesday 11 March 2026 00:43:59 +0000 (0:00:00.439) 0:00:52.057 *******
2026-03-11 00:44:00.316515 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2026-03-11 00:44:00.316518 | orchestrator |
2026-03-11 00:44:00.316522 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:00.316526 | orchestrator | Wednesday 11 March 2026 00:43:59 +0000 (0:00:00.334) 0:00:52.391 *******
2026-03-11 00:44:00.316530 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2026-03-11 00:44:00.316534 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2026-03-11 00:44:00.316538 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2026-03-11 00:44:00.316542 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2026-03-11 00:44:00.316545 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2026-03-11 00:44:00.316549 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2026-03-11 00:44:00.316553 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2026-03-11 00:44:00.316556 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2026-03-11 00:44:00.316560 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2026-03-11 00:44:00.316564 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2026-03-11 00:44:00.316567 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2026-03-11 00:44:00.316578 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2026-03-11 00:44:08.805866 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2026-03-11 00:44:08.806003 | orchestrator |
2026-03-11 00:44:08.806087 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:08.806100 | orchestrator | Wednesday 11 March 2026 00:44:00 +0000 (0:00:00.423) 0:00:52.814 *******
2026-03-11 00:44:08.806138 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.806151 | orchestrator |
2026-03-11 00:44:08.806163 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:08.806174 | orchestrator | Wednesday 11 March 2026 00:44:00 +0000 (0:00:00.197) 0:00:53.012 *******
2026-03-11 00:44:08.806184 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.806195 | orchestrator |
2026-03-11 00:44:08.806263 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:08.806276 | orchestrator | Wednesday 11 March 2026 00:44:01 +0000 (0:00:00.706) 0:00:53.719 *******
2026-03-11 00:44:08.806287 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.806298 | orchestrator |
2026-03-11 00:44:08.806309 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:08.806320 | orchestrator | Wednesday 11 March 2026 00:44:01 +0000 (0:00:00.192) 0:00:53.911 *******
2026-03-11 00:44:08.806331 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.806342 | orchestrator |
2026-03-11 00:44:08.806353 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:08.806364 | orchestrator | Wednesday 11 March 2026 00:44:01 +0000 (0:00:00.199) 0:00:54.110 *******
2026-03-11 00:44:08.806375 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.806385 | orchestrator |
2026-03-11 00:44:08.806396 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:08.806409 | orchestrator | Wednesday 11 March 2026 00:44:01 +0000 (0:00:00.181) 0:00:54.292 *******
2026-03-11 00:44:08.806422 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.806435 | orchestrator |
2026-03-11 00:44:08.806452 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:08.806465 | orchestrator | Wednesday 11 March 2026 00:44:02 +0000 (0:00:00.201) 0:00:54.493 *******
2026-03-11 00:44:08.806478 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.806490 | orchestrator |
2026-03-11 00:44:08.806502 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:08.806515 | orchestrator | Wednesday 11 March 2026 00:44:02 +0000 (0:00:00.162) 0:00:54.656 *******
2026-03-11 00:44:08.806527 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.806540 | orchestrator |
2026-03-11 00:44:08.806552 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:08.806565 | orchestrator | Wednesday 11 March 2026 00:44:02 +0000 (0:00:00.164) 0:00:54.820 *******
2026-03-11 00:44:08.806577 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2026-03-11 00:44:08.806590 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2026-03-11 00:44:08.806603 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2026-03-11 00:44:08.806616 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2026-03-11 00:44:08.806629 | orchestrator |
2026-03-11 00:44:08.806641 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:08.806654 | orchestrator | Wednesday 11 March 2026 00:44:02 +0000 (0:00:00.588) 0:00:55.409 *******
2026-03-11 00:44:08.806666 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.806678 | orchestrator |
2026-03-11 00:44:08.806691 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:08.806703 | orchestrator | Wednesday 11 March 2026 00:44:03 +0000 (0:00:00.181) 0:00:55.590 *******
2026-03-11 00:44:08.806715 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.806727 | orchestrator |
2026-03-11 00:44:08.806740 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:08.806752 | orchestrator | Wednesday 11 March 2026 00:44:03 +0000 (0:00:00.193) 0:00:55.783 *******
2026-03-11 00:44:08.806765 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.806776 | orchestrator |
2026-03-11 00:44:08.806786 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2026-03-11 00:44:08.806797 | orchestrator | Wednesday 11 March 2026 00:44:03 +0000 (0:00:00.164) 0:00:55.948 *******
2026-03-11 00:44:08.806816 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.806827 | orchestrator |
2026-03-11 00:44:08.806838 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2026-03-11 00:44:08.806849 | orchestrator | Wednesday 11 March 2026 00:44:03 +0000 (0:00:00.185) 0:00:56.133 *******
2026-03-11 00:44:08.806859 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.806870 | orchestrator |
2026-03-11 00:44:08.806881 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2026-03-11 00:44:08.806891 | orchestrator | Wednesday 11 March 2026 00:44:03 +0000 (0:00:00.241) 0:00:56.375 *******
2026-03-11 00:44:08.806902 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'c12a1925-beca-5a04-a9cd-b492500b7146'}})
2026-03-11 00:44:08.806914 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '75b18a9f-434b-5575-8ed7-e1e8868eceb5'}})
2026-03-11 00:44:08.806950 | orchestrator |
2026-03-11 00:44:08.806964 | orchestrator | TASK [Create block VGs] ********************************************************
2026-03-11 00:44:08.806975 | orchestrator | Wednesday 11 March 2026 00:44:04 +0000 (0:00:00.168) 0:00:56.543 *******
2026-03-11 00:44:08.806987 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:08.806999 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:08.807009 | orchestrator |
2026-03-11 00:44:08.807020 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2026-03-11 00:44:08.807049 | orchestrator | Wednesday 11 March 2026 00:44:05 +0000 (0:00:01.834) 0:00:58.378 *******
2026-03-11 00:44:08.807062 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:08.807074 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:08.807085 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.807096 | orchestrator |
2026-03-11 00:44:08.807107 | orchestrator | TASK [Create block LVs] ********************************************************
2026-03-11 00:44:08.807118 | orchestrator | Wednesday 11 March 2026 00:44:06 +0000 (0:00:00.133) 0:00:58.511 *******
2026-03-11 00:44:08.807129 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:08.807140 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:08.807151 | orchestrator |
2026-03-11 00:44:08.807161 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2026-03-11 00:44:08.807172 | orchestrator | Wednesday 11 March 2026 00:44:07 +0000 (0:00:01.396) 0:00:59.907 *******
2026-03-11 00:44:08.807183 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:08.807194 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:08.807210 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.807221 | orchestrator |
2026-03-11 00:44:08.807232 | orchestrator | TASK [Create DB VGs] ***********************************************************
2026-03-11 00:44:08.807243 | orchestrator | Wednesday 11 March 2026 00:44:07 +0000 (0:00:00.134) 0:01:00.042 *******
2026-03-11 00:44:08.807254 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.807264 | orchestrator |
2026-03-11 00:44:08.807275 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2026-03-11 00:44:08.807286 | orchestrator | Wednesday 11 March 2026 00:44:07 +0000 (0:00:00.163) 0:01:00.206 *******
2026-03-11 00:44:08.807304 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:08.807315 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:08.807326 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.807337 | orchestrator |
2026-03-11 00:44:08.807347 | orchestrator | TASK [Create WAL VGs] **********************************************************
2026-03-11 00:44:08.807358 | orchestrator | Wednesday 11 March 2026 00:44:07 +0000 (0:00:00.146) 0:01:00.352 *******
2026-03-11 00:44:08.807369 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.807380 | orchestrator |
2026-03-11 00:44:08.807390 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2026-03-11 00:44:08.807401 | orchestrator | Wednesday 11 March 2026 00:44:08 +0000 (0:00:00.138) 0:01:00.491 *******
2026-03-11 00:44:08.807412 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:08.807423 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:08.807433 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.807444 | orchestrator |
2026-03-11 00:44:08.807455 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2026-03-11 00:44:08.807466 | orchestrator | Wednesday 11 March 2026 00:44:08 +0000 (0:00:00.141) 0:01:00.632 *******
2026-03-11 00:44:08.807477 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.807487 | orchestrator |
2026-03-11 00:44:08.807498 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2026-03-11 00:44:08.807509 | orchestrator | Wednesday 11 March 2026 00:44:08 +0000 (0:00:00.108) 0:01:00.741 *******
2026-03-11 00:44:08.807520 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:08.807531 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:08.807542 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:08.807553 | orchestrator |
2026-03-11 00:44:08.807563 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2026-03-11 00:44:08.807574 | orchestrator | Wednesday 11 March 2026 00:44:08 +0000 (0:00:00.143) 0:01:00.885 *******
2026-03-11 00:44:08.807585 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:44:08.807596 | orchestrator |
2026-03-11 00:44:08.807607 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2026-03-11 00:44:08.807618 | orchestrator | Wednesday 11 March 2026 00:44:08 +0000 (0:00:00.282) 0:01:01.168 *******
2026-03-11 00:44:08.807635 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:14.310258 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:14.310353 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.310364 | orchestrator |
2026-03-11 00:44:14.310372 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2026-03-11 00:44:14.310380 | orchestrator | Wednesday 11 March 2026 00:44:08 +0000 (0:00:00.140) 0:01:01.309 *******
2026-03-11 00:44:14.310387 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:14.310393 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:14.310423 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.310430 | orchestrator |
2026-03-11 00:44:14.310437 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2026-03-11 00:44:14.310444 | orchestrator | Wednesday 11 March 2026 00:44:09 +0000 (0:00:00.145) 0:01:01.455 *******
2026-03-11 00:44:14.310451 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:14.310458 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:14.310465 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.310471 | orchestrator |
2026-03-11 00:44:14.310478 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2026-03-11 00:44:14.310498 | orchestrator | Wednesday 11 March 2026 00:44:09 +0000 (0:00:00.149) 0:01:01.605 *******
2026-03-11 00:44:14.310505 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.310512 | orchestrator |
2026-03-11 00:44:14.310519 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2026-03-11 00:44:14.310526 | orchestrator | Wednesday 11 March 2026 00:44:09 +0000 (0:00:00.108) 0:01:01.713 *******
2026-03-11 00:44:14.310533 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.310540 | orchestrator |
2026-03-11 00:44:14.310547 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2026-03-11 00:44:14.310553 | orchestrator | Wednesday 11 March 2026 00:44:09 +0000 (0:00:00.116) 0:01:01.830 *******
2026-03-11 00:44:14.310560 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.310567 | orchestrator |
2026-03-11 00:44:14.310574 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2026-03-11 00:44:14.310581 | orchestrator | Wednesday 11 March 2026 00:44:09 +0000 (0:00:00.110) 0:01:01.940 *******
2026-03-11 00:44:14.310588 | orchestrator | ok: [testbed-node-5] => {
2026-03-11 00:44:14.310595 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2026-03-11 00:44:14.310602 | orchestrator | }
2026-03-11 00:44:14.310610 | orchestrator |
2026-03-11 00:44:14.310617 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2026-03-11 00:44:14.310624 | orchestrator | Wednesday 11 March 2026 00:44:09 +0000 (0:00:00.126) 0:01:02.067 *******
2026-03-11 00:44:14.310631 | orchestrator | ok: [testbed-node-5] => {
2026-03-11 00:44:14.310638 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2026-03-11 00:44:14.310645 | orchestrator | }
2026-03-11 00:44:14.310651 | orchestrator |
2026-03-11 00:44:14.310658 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2026-03-11 00:44:14.310665 | orchestrator | Wednesday 11 March 2026 00:44:09 +0000 (0:00:00.126) 0:01:02.193 *******
2026-03-11 00:44:14.310672 | orchestrator | ok: [testbed-node-5] => {
2026-03-11 00:44:14.310679 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2026-03-11 00:44:14.310685 | orchestrator | }
2026-03-11 00:44:14.310692 | orchestrator |
2026-03-11 00:44:14.310698 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2026-03-11 00:44:14.310705 | orchestrator | Wednesday 11 March 2026 00:44:09 +0000 (0:00:00.143) 0:01:02.337 *******
2026-03-11 00:44:14.310711 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:44:14.310717 | orchestrator |
2026-03-11 00:44:14.310723 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2026-03-11 00:44:14.310728 | orchestrator | Wednesday 11 March 2026 00:44:10 +0000 (0:00:00.534) 0:01:02.872 *******
2026-03-11 00:44:14.310734 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:44:14.310740 | orchestrator |
2026-03-11 00:44:14.310745 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2026-03-11 00:44:14.310752 | orchestrator | Wednesday 11 March 2026 00:44:10 +0000 (0:00:00.524) 0:01:03.396 *******
2026-03-11 00:44:14.310758 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:44:14.310772 | orchestrator |
2026-03-11 00:44:14.310778 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2026-03-11 00:44:14.310785 | orchestrator | Wednesday 11 March 2026 00:44:11 +0000 (0:00:00.648) 0:01:04.044 *******
2026-03-11 00:44:14.310791 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:44:14.310798 | orchestrator |
2026-03-11 00:44:14.310805 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2026-03-11 00:44:14.310812 | orchestrator | Wednesday 11 March 2026 00:44:11 +0000 (0:00:00.140) 0:01:04.184 *******
2026-03-11 00:44:14.310818 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.310825 | orchestrator |
2026-03-11 00:44:14.310832 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2026-03-11 00:44:14.310840 | orchestrator | Wednesday 11 March 2026 00:44:11 +0000 (0:00:00.099) 0:01:04.284 *******
2026-03-11 00:44:14.310847 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.310855 | orchestrator |
2026-03-11 00:44:14.310863 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2026-03-11 00:44:14.310872 | orchestrator | Wednesday 11 March 2026 00:44:11 +0000 (0:00:00.091) 0:01:04.376 *******
2026-03-11 00:44:14.310879 | orchestrator | ok: [testbed-node-5] => {
2026-03-11 00:44:14.310885 | orchestrator |     "vgs_report": {
2026-03-11 00:44:14.310892 | orchestrator |         "vg": []
2026-03-11 00:44:14.310914 | orchestrator |     }
2026-03-11 00:44:14.310945 | orchestrator | }
2026-03-11 00:44:14.310951 | orchestrator |
2026-03-11 00:44:14.310957 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2026-03-11 00:44:14.310964 | orchestrator | Wednesday 11 March 2026 00:44:12 +0000 (0:00:00.131) 0:01:04.507 *******
2026-03-11 00:44:14.310971 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.310977 | orchestrator |
2026-03-11 00:44:14.310984 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2026-03-11 00:44:14.310990 | orchestrator | Wednesday 11 March 2026 00:44:12 +0000 (0:00:00.129) 0:01:04.637 *******
2026-03-11 00:44:14.310997 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.311003 | orchestrator |
2026-03-11 00:44:14.311010 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2026-03-11 00:44:14.311016 | orchestrator | Wednesday 11 March 2026 00:44:12 +0000 (0:00:00.108) 0:01:04.746 *******
2026-03-11 00:44:14.311023 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.311029 | orchestrator |
2026-03-11 00:44:14.311036 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2026-03-11 00:44:14.311043 | orchestrator | Wednesday 11 March 2026 00:44:12 +0000 (0:00:00.107) 0:01:04.853 *******
2026-03-11 00:44:14.311050 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.311056 | orchestrator |
2026-03-11 00:44:14.311063 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2026-03-11 00:44:14.311070 | orchestrator | Wednesday 11 March 2026 00:44:12 +0000 (0:00:00.107) 0:01:04.961 *******
2026-03-11 00:44:14.311076 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.311083 | orchestrator |
2026-03-11 00:44:14.311090 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2026-03-11 00:44:14.311096 | orchestrator | Wednesday 11 March 2026 00:44:12 +0000 (0:00:00.142) 0:01:05.103 *******
2026-03-11 00:44:14.311103 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.311109 | orchestrator |
2026-03-11 00:44:14.311116 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2026-03-11 00:44:14.311122 | orchestrator | Wednesday 11 March 2026 00:44:12 +0000 (0:00:00.128) 0:01:05.232 *******
2026-03-11 00:44:14.311129 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.311136 | orchestrator |
2026-03-11 00:44:14.311142 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2026-03-11 00:44:14.311149 | orchestrator | Wednesday 11 March 2026 00:44:12 +0000 (0:00:00.129) 0:01:05.361 *******
2026-03-11 00:44:14.311156 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.311163 | orchestrator |
2026-03-11 00:44:14.311170 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2026-03-11 00:44:14.311184 | orchestrator | Wednesday 11 March 2026 00:44:13 +0000 (0:00:00.270) 0:01:05.632 *******
2026-03-11 00:44:14.311190 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.311197 | orchestrator |
2026-03-11 00:44:14.311203 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2026-03-11 00:44:14.311210 | orchestrator | Wednesday 11 March 2026 00:44:13 +0000 (0:00:00.129) 0:01:05.761 *******
2026-03-11 00:44:14.311217 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.311223 | orchestrator |
2026-03-11 00:44:14.311230 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2026-03-11 00:44:14.311237 | orchestrator | Wednesday 11 March 2026 00:44:13 +0000 (0:00:00.125) 0:01:05.887 *******
2026-03-11 00:44:14.311243 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.311250 | orchestrator |
2026-03-11 00:44:14.311257 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2026-03-11 00:44:14.311264 | orchestrator | Wednesday 11 March 2026 00:44:13 +0000 (0:00:00.126) 0:01:06.013 *******
2026-03-11 00:44:14.311271 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.311277 | orchestrator |
2026-03-11 00:44:14.311284 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2026-03-11 00:44:14.311290 | orchestrator | Wednesday 11 March 2026 00:44:13 +0000 (0:00:00.128) 0:01:06.142 *******
2026-03-11 00:44:14.311296 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.311303 | orchestrator |
2026-03-11 00:44:14.311310 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2026-03-11 00:44:14.311317 | orchestrator | Wednesday 11 March 2026 00:44:13 +0000 (0:00:00.127) 0:01:06.270 *******
2026-03-11 00:44:14.311323 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.311330 | orchestrator |
2026-03-11 00:44:14.311337 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2026-03-11 00:44:14.311344 | orchestrator | Wednesday 11 March 2026 00:44:13 +0000 (0:00:00.111) 0:01:06.381 *******
2026-03-11 00:44:14.311351 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:14.311358 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:14.311365 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.311372 | orchestrator |
2026-03-11 00:44:14.311378 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2026-03-11 00:44:14.311385 | orchestrator | Wednesday 11 March 2026 00:44:14 +0000 (0:00:00.157) 0:01:06.539 *******
2026-03-11 00:44:14.311391 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:14.311398 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:14.311405 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:14.311411 | orchestrator |
2026-03-11 00:44:14.311418 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2026-03-11 00:44:14.311425 | orchestrator | Wednesday 11 March 2026 00:44:14 +0000 (0:00:00.124) 0:01:06.664 *******
2026-03-11 00:44:14.311439 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:17.248052 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:17.248161 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:17.248175 | orchestrator |
2026-03-11 00:44:17.248186 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2026-03-11 00:44:17.248195 | orchestrator | Wednesday 11 March 2026 00:44:14 +0000 (0:00:00.146) 0:01:06.810 *******
2026-03-11 00:44:17.248278 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:17.248292 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:17.248301 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:17.248309 | orchestrator |
2026-03-11 00:44:17.248318 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2026-03-11 00:44:17.248327 | orchestrator | Wednesday 11 March 2026 00:44:14 +0000 (0:00:00.137) 0:01:06.948 *******
2026-03-11 00:44:17.248336 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:17.248349 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:17.248358 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:17.248366 | orchestrator |
2026-03-11 00:44:17.248375 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2026-03-11 00:44:17.248384 | orchestrator | Wednesday 11 March 2026 00:44:14 +0000 (0:00:00.147) 0:01:07.095 *******
2026-03-11 00:44:17.248392 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:17.248401 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:17.248409 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:17.248418 | orchestrator |
2026-03-11 00:44:17.248427 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2026-03-11 00:44:17.248436 | orchestrator | Wednesday 11 March 2026 00:44:14 +0000 (0:00:00.276) 0:01:07.372 *******
2026-03-11 00:44:17.248444 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:17.248453 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:17.248461 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:17.248470 | orchestrator |
2026-03-11 00:44:17.248478 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2026-03-11 00:44:17.248487 | orchestrator | Wednesday 11 March 2026 00:44:15 +0000 (0:00:00.155) 0:01:07.527 *******
2026-03-11 00:44:17.248495 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:17.248504 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:17.248512 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:44:17.248521 | orchestrator |
2026-03-11 00:44:17.248529 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2026-03-11 00:44:17.248538 | orchestrator | Wednesday 11 March 2026 00:44:15 +0000 (0:00:00.141) 0:01:07.669 *******
2026-03-11 00:44:17.248548 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:44:17.248559 | orchestrator |
2026-03-11 00:44:17.248568 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2026-03-11 00:44:17.248579 | orchestrator | Wednesday 11 March 2026 00:44:15 +0000 (0:00:00.584) 0:01:08.253 *******
2026-03-11 00:44:17.248588 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:44:17.248598 | orchestrator |
2026-03-11 00:44:17.248608 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2026-03-11 00:44:17.248626 | orchestrator | Wednesday 11 March 2026 00:44:16 +0000 (0:00:00.505) 0:01:08.759 *******
2026-03-11 00:44:17.248636 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:44:17.248646 | orchestrator |
2026-03-11 00:44:17.248656 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2026-03-11 00:44:17.248666 | orchestrator | Wednesday 11 March 2026 00:44:16 +0000 (0:00:00.154) 0:01:08.914 *******
2026-03-11 00:44:17.248676 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'vg_name': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:44:17.248686 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'vg_name': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:17.248697 | orchestrator |
2026-03-11 00:44:17.248707 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2026-03-11 00:44:17.248718 | orchestrator | Wednesday 11 March 2026 00:44:16 +0000 (0:00:00.155) 0:01:09.069 *******
2026-03-11 00:44:17.248742 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:44:17.248753 | orchestrator | skipping: [testbed-node-5] => (item={'data':
'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})  2026-03-11 00:44:17.248763 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:17.248773 | orchestrator | 2026-03-11 00:44:17.248783 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2026-03-11 00:44:17.248793 | orchestrator | Wednesday 11 March 2026 00:44:16 +0000 (0:00:00.150) 0:01:09.219 ******* 2026-03-11 00:44:17.248802 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})  2026-03-11 00:44:17.248812 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})  2026-03-11 00:44:17.248822 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:17.248832 | orchestrator | 2026-03-11 00:44:17.248842 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2026-03-11 00:44:17.248852 | orchestrator | Wednesday 11 March 2026 00:44:16 +0000 (0:00:00.158) 0:01:09.378 ******* 2026-03-11 00:44:17.248862 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})  2026-03-11 00:44:17.248876 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})  2026-03-11 00:44:17.248887 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:17.248898 | orchestrator | 2026-03-11 00:44:17.248908 | orchestrator | TASK [Print LVM report data] *************************************************** 2026-03-11 00:44:17.248944 | orchestrator | Wednesday 11 March 2026 00:44:17 +0000 (0:00:00.139) 0:01:09.517 ******* 2026-03-11 00:44:17.248959 | 
orchestrator | ok: [testbed-node-5] => { 2026-03-11 00:44:17.248974 | orchestrator |  "lvm_report": { 2026-03-11 00:44:17.248991 | orchestrator |  "lv": [ 2026-03-11 00:44:17.249006 | orchestrator |  { 2026-03-11 00:44:17.249020 | orchestrator |  "lv_name": "osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5", 2026-03-11 00:44:17.249029 | orchestrator |  "vg_name": "ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5" 2026-03-11 00:44:17.249038 | orchestrator |  }, 2026-03-11 00:44:17.249046 | orchestrator |  { 2026-03-11 00:44:17.249055 | orchestrator |  "lv_name": "osd-block-c12a1925-beca-5a04-a9cd-b492500b7146", 2026-03-11 00:44:17.249063 | orchestrator |  "vg_name": "ceph-c12a1925-beca-5a04-a9cd-b492500b7146" 2026-03-11 00:44:17.249072 | orchestrator |  } 2026-03-11 00:44:17.249081 | orchestrator |  ], 2026-03-11 00:44:17.249089 | orchestrator |  "pv": [ 2026-03-11 00:44:17.249105 | orchestrator |  { 2026-03-11 00:44:17.249113 | orchestrator |  "pv_name": "/dev/sdb", 2026-03-11 00:44:17.249122 | orchestrator |  "vg_name": "ceph-c12a1925-beca-5a04-a9cd-b492500b7146" 2026-03-11 00:44:17.249130 | orchestrator |  }, 2026-03-11 00:44:17.249139 | orchestrator |  { 2026-03-11 00:44:17.249147 | orchestrator |  "pv_name": "/dev/sdc", 2026-03-11 00:44:17.249156 | orchestrator |  "vg_name": "ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5" 2026-03-11 00:44:17.249164 | orchestrator |  } 2026-03-11 00:44:17.249173 | orchestrator |  ] 2026-03-11 00:44:17.249181 | orchestrator |  } 2026-03-11 00:44:17.249190 | orchestrator | } 2026-03-11 00:44:17.249199 | orchestrator | 2026-03-11 00:44:17.249208 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:44:17.249217 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-11 00:44:17.249226 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-11 00:44:17.249234 | 
orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2026-03-11 00:44:17.249243 | orchestrator | 2026-03-11 00:44:17.249252 | orchestrator | 2026-03-11 00:44:17.249260 | orchestrator | 2026-03-11 00:44:17.249268 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:44:17.249277 | orchestrator | Wednesday 11 March 2026 00:44:17 +0000 (0:00:00.130) 0:01:09.648 ******* 2026-03-11 00:44:17.249285 | orchestrator | =============================================================================== 2026-03-11 00:44:17.249294 | orchestrator | Create block VGs -------------------------------------------------------- 5.58s 2026-03-11 00:44:17.249302 | orchestrator | Create block LVs -------------------------------------------------------- 4.20s 2026-03-11 00:44:17.249311 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.75s 2026-03-11 00:44:17.249320 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 1.71s 2026-03-11 00:44:17.249328 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.64s 2026-03-11 00:44:17.249336 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.60s 2026-03-11 00:44:17.249345 | orchestrator | Add known partitions to the list of available block devices ------------- 1.58s 2026-03-11 00:44:17.249354 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.52s 2026-03-11 00:44:17.249368 | orchestrator | Add known links to the list of available block devices ------------------ 1.21s 2026-03-11 00:44:17.575390 | orchestrator | Add known partitions to the list of available block devices ------------- 0.89s 2026-03-11 00:44:17.575467 | orchestrator | Print LVM report data --------------------------------------------------- 0.87s 2026-03-11 00:44:17.575475 | 
orchestrator | Add known partitions to the list of available block devices ------------- 0.87s 2026-03-11 00:44:17.575482 | orchestrator | Add known partitions to the list of available block devices ------------- 0.80s 2026-03-11 00:44:17.575488 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.77s 2026-03-11 00:44:17.575495 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.75s 2026-03-11 00:44:17.575501 | orchestrator | Calculate size needed for LVs on ceph_wal_devices ----------------------- 0.72s 2026-03-11 00:44:17.575508 | orchestrator | Add known partitions to the list of available block devices ------------- 0.71s 2026-03-11 00:44:17.575515 | orchestrator | Add known links to the list of available block devices ------------------ 0.70s 2026-03-11 00:44:17.575521 | orchestrator | Create WAL LVs for ceph_wal_devices ------------------------------------- 0.67s 2026-03-11 00:44:17.575528 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2026-03-11 00:44:29.710506 | orchestrator | 2026-03-11 00:44:29 | INFO  | Prepare task for execution of facts. 2026-03-11 00:44:29.778128 | orchestrator | 2026-03-11 00:44:29 | INFO  | Task c9458cff-0024-4705-99fe-eac15ea96018 (facts) was prepared for execution. 2026-03-11 00:44:29.778244 | orchestrator | 2026-03-11 00:44:29 | INFO  | It takes a moment until task c9458cff-0024-4705-99fe-eac15ea96018 (facts) has been started and output is visible here. 
2026-03-11 00:44:41.703834 | orchestrator | 2026-03-11 00:44:41.704050 | orchestrator | PLAY [Apply role facts] ******************************************************** 2026-03-11 00:44:41.704079 | orchestrator | 2026-03-11 00:44:41.704096 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2026-03-11 00:44:41.704114 | orchestrator | Wednesday 11 March 2026 00:44:33 +0000 (0:00:00.281) 0:00:00.281 ******* 2026-03-11 00:44:41.704138 | orchestrator | ok: [testbed-manager] 2026-03-11 00:44:41.704160 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:44:41.704175 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:44:41.704193 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:44:41.704215 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:44:41.704234 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:44:41.704250 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:44:41.704276 | orchestrator | 2026-03-11 00:44:41.704296 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2026-03-11 00:44:41.704316 | orchestrator | Wednesday 11 March 2026 00:44:35 +0000 (0:00:01.140) 0:00:01.422 ******* 2026-03-11 00:44:41.704337 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:44:41.704361 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:44:41.704381 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:44:41.704399 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:44:41.704419 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:44:41.704438 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:41.704457 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:41.704475 | orchestrator | 2026-03-11 00:44:41.704498 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2026-03-11 00:44:41.704519 | orchestrator | 2026-03-11 00:44:41.704583 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2026-03-11 00:44:41.704604 | orchestrator | Wednesday 11 March 2026 00:44:36 +0000 (0:00:01.228) 0:00:02.650 ******* 2026-03-11 00:44:41.704623 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:44:41.704640 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:44:41.704658 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:44:41.704678 | orchestrator | ok: [testbed-manager] 2026-03-11 00:44:41.704701 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:44:41.704725 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:44:41.704747 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:44:41.704768 | orchestrator | 2026-03-11 00:44:41.704790 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2026-03-11 00:44:41.704808 | orchestrator | 2026-03-11 00:44:41.704828 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2026-03-11 00:44:41.704846 | orchestrator | Wednesday 11 March 2026 00:44:40 +0000 (0:00:04.671) 0:00:07.322 ******* 2026-03-11 00:44:41.704870 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:44:41.704887 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:44:41.704946 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:44:41.704969 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:44:41.704988 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:44:41.705006 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:44:41.705045 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:44:41.705061 | orchestrator | 2026-03-11 00:44:41.705078 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:44:41.705094 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:44:41.705112 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 
ignored=0 2026-03-11 00:44:41.705170 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:44:41.705188 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:44:41.705205 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:44:41.705221 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:44:41.705236 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 00:44:41.705253 | orchestrator | 2026-03-11 00:44:41.705268 | orchestrator | 2026-03-11 00:44:41.705284 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:44:41.705294 | orchestrator | Wednesday 11 March 2026 00:44:41 +0000 (0:00:00.465) 0:00:07.787 ******* 2026-03-11 00:44:41.705304 | orchestrator | =============================================================================== 2026-03-11 00:44:41.705314 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.67s 2026-03-11 00:44:41.705323 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.23s 2026-03-11 00:44:41.705333 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.14s 2026-03-11 00:44:41.705342 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.47s 2026-03-11 00:44:53.935080 | orchestrator | 2026-03-11 00:44:53 | INFO  | Prepare task for execution of frr. 2026-03-11 00:44:53.998834 | orchestrator | 2026-03-11 00:44:53 | INFO  | Task dc82b8c5-4110-438d-8b56-8e314de55ac6 (frr) was prepared for execution. 
2026-03-11 00:44:53.998991 | orchestrator | 2026-03-11 00:44:54 | INFO  | It takes a moment until task dc82b8c5-4110-438d-8b56-8e314de55ac6 (frr) has been started and output is visible here. 2026-03-11 00:45:20.528534 | orchestrator | 2026-03-11 00:45:20.528631 | orchestrator | PLAY [Apply role frr] ********************************************************** 2026-03-11 00:45:20.528642 | orchestrator | 2026-03-11 00:45:20.528650 | orchestrator | TASK [osism.services.frr : Include distribution specific install tasks] ******** 2026-03-11 00:45:20.528663 | orchestrator | Wednesday 11 March 2026 00:44:57 +0000 (0:00:00.233) 0:00:00.233 ******* 2026-03-11 00:45:20.528671 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager 2026-03-11 00:45:20.528680 | orchestrator | 2026-03-11 00:45:20.528687 | orchestrator | TASK [osism.services.frr : Pin frr package version] **************************** 2026-03-11 00:45:20.528694 | orchestrator | Wednesday 11 March 2026 00:44:58 +0000 (0:00:00.223) 0:00:00.457 ******* 2026-03-11 00:45:20.528701 | orchestrator | changed: [testbed-manager] 2026-03-11 00:45:20.528709 | orchestrator | 2026-03-11 00:45:20.528716 | orchestrator | TASK [osism.services.frr : Install frr package] ******************************** 2026-03-11 00:45:20.528724 | orchestrator | Wednesday 11 March 2026 00:44:59 +0000 (0:00:01.227) 0:00:01.684 ******* 2026-03-11 00:45:20.528731 | orchestrator | changed: [testbed-manager] 2026-03-11 00:45:20.528738 | orchestrator | 2026-03-11 00:45:20.528745 | orchestrator | TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] ********************* 2026-03-11 00:45:20.528752 | orchestrator | Wednesday 11 March 2026 00:45:10 +0000 (0:00:10.926) 0:00:12.611 ******* 2026-03-11 00:45:20.528760 | orchestrator | ok: [testbed-manager] 2026-03-11 00:45:20.528769 | orchestrator | 2026-03-11 00:45:20.528777 | orchestrator | TASK 
[osism.services.frr : Copy file: /etc/frr/daemons] ************************ 2026-03-11 00:45:20.528785 | orchestrator | Wednesday 11 March 2026 00:45:11 +0000 (0:00:00.970) 0:00:13.582 ******* 2026-03-11 00:45:20.528792 | orchestrator | changed: [testbed-manager] 2026-03-11 00:45:20.528824 | orchestrator | 2026-03-11 00:45:20.528832 | orchestrator | TASK [osism.services.frr : Set _frr_uplinks fact] ****************************** 2026-03-11 00:45:20.528840 | orchestrator | Wednesday 11 March 2026 00:45:12 +0000 (0:00:00.909) 0:00:14.492 ******* 2026-03-11 00:45:20.528849 | orchestrator | ok: [testbed-manager] 2026-03-11 00:45:20.528856 | orchestrator | 2026-03-11 00:45:20.528912 | orchestrator | TASK [osism.services.frr : Write frr_config_template to temporary file] ******** 2026-03-11 00:45:20.528921 | orchestrator | Wednesday 11 March 2026 00:45:13 +0000 (0:00:01.172) 0:00:15.664 ******* 2026-03-11 00:45:20.528928 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:45:20.528952 | orchestrator | 2026-03-11 00:45:20.528959 | orchestrator | TASK [osism.services.frr : Render frr.conf from frr_config_template variable] *** 2026-03-11 00:45:20.528967 | orchestrator | Wednesday 11 March 2026 00:45:13 +0000 (0:00:00.161) 0:00:15.825 ******* 2026-03-11 00:45:20.528975 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:45:20.528982 | orchestrator | 2026-03-11 00:45:20.528990 | orchestrator | TASK [osism.services.frr : Remove temporary frr_config_template file] ********** 2026-03-11 00:45:20.528997 | orchestrator | Wednesday 11 March 2026 00:45:13 +0000 (0:00:00.146) 0:00:15.972 ******* 2026-03-11 00:45:20.529005 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:45:20.529013 | orchestrator | 2026-03-11 00:45:20.529020 | orchestrator | TASK [osism.services.frr : Check for frr.conf file in the configuration repository] *** 2026-03-11 00:45:20.529028 | orchestrator | Wednesday 11 March 2026 00:45:13 +0000 (0:00:00.140) 0:00:16.112 ******* 2026-03-11 
00:45:20.529036 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:45:20.529043 | orchestrator | 2026-03-11 00:45:20.529051 | orchestrator | TASK [osism.services.frr : Copy frr.conf file from the configuration repository] *** 2026-03-11 00:45:20.529058 | orchestrator | Wednesday 11 March 2026 00:45:13 +0000 (0:00:00.141) 0:00:16.253 ******* 2026-03-11 00:45:20.529066 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:45:20.529073 | orchestrator | 2026-03-11 00:45:20.529081 | orchestrator | TASK [osism.services.frr : Copy default frr.conf file of type k3s_cilium] ****** 2026-03-11 00:45:20.529089 | orchestrator | Wednesday 11 March 2026 00:45:14 +0000 (0:00:00.151) 0:00:16.405 ******* 2026-03-11 00:45:20.529097 | orchestrator | changed: [testbed-manager] 2026-03-11 00:45:20.529105 | orchestrator | 2026-03-11 00:45:20.529111 | orchestrator | TASK [osism.services.frr : Set sysctl parameters] ****************************** 2026-03-11 00:45:20.529117 | orchestrator | Wednesday 11 March 2026 00:45:15 +0000 (0:00:01.177) 0:00:17.583 ******* 2026-03-11 00:45:20.529122 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1}) 2026-03-11 00:45:20.529128 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0}) 2026-03-11 00:45:20.529135 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0}) 2026-03-11 00:45:20.529140 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1}) 2026-03-11 00:45:20.529145 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1}) 2026-03-11 00:45:20.529150 | orchestrator | changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2}) 2026-03-11 00:45:20.529155 | orchestrator | 2026-03-11 00:45:20.529160 | orchestrator | TASK 
[osism.services.frr : Manage frr service] ********************************* 2026-03-11 00:45:20.529165 | orchestrator | Wednesday 11 March 2026 00:45:17 +0000 (0:00:02.297) 0:00:19.881 ******* 2026-03-11 00:45:20.529170 | orchestrator | ok: [testbed-manager] 2026-03-11 00:45:20.529175 | orchestrator | 2026-03-11 00:45:20.529180 | orchestrator | RUNNING HANDLER [osism.services.frr : Restart frr service] ********************* 2026-03-11 00:45:20.529186 | orchestrator | Wednesday 11 March 2026 00:45:18 +0000 (0:00:01.207) 0:00:21.089 ******* 2026-03-11 00:45:20.529191 | orchestrator | changed: [testbed-manager] 2026-03-11 00:45:20.529196 | orchestrator | 2026-03-11 00:45:20.529201 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:45:20.529214 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-11 00:45:20.529221 | orchestrator | 2026-03-11 00:45:20.529226 | orchestrator | 2026-03-11 00:45:20.529248 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:45:20.529252 | orchestrator | Wednesday 11 March 2026 00:45:20 +0000 (0:00:01.426) 0:00:22.516 ******* 2026-03-11 00:45:20.529257 | orchestrator | =============================================================================== 2026-03-11 00:45:20.529261 | orchestrator | osism.services.frr : Install frr package ------------------------------- 10.93s 2026-03-11 00:45:20.529266 | orchestrator | osism.services.frr : Set sysctl parameters ------------------------------ 2.30s 2026-03-11 00:45:20.529270 | orchestrator | osism.services.frr : Restart frr service -------------------------------- 1.43s 2026-03-11 00:45:20.529275 | orchestrator | osism.services.frr : Pin frr package version ---------------------------- 1.23s 2026-03-11 00:45:20.529279 | orchestrator | osism.services.frr : Manage frr service --------------------------------- 1.21s 
2026-03-11 00:45:20.529284 | orchestrator | osism.services.frr : Copy default frr.conf file of type k3s_cilium ------ 1.18s 2026-03-11 00:45:20.529288 | orchestrator | osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.17s 2026-03-11 00:45:20.529293 | orchestrator | osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 0.97s 2026-03-11 00:45:20.529297 | orchestrator | osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 0.91s 2026-03-11 00:45:20.529301 | orchestrator | osism.services.frr : Include distribution specific install tasks -------- 0.22s 2026-03-11 00:45:20.529306 | orchestrator | osism.services.frr : Write frr_config_template to temporary file -------- 0.16s 2026-03-11 00:45:20.529310 | orchestrator | osism.services.frr : Copy frr.conf file from the configuration repository --- 0.15s 2026-03-11 00:45:20.529315 | orchestrator | osism.services.frr : Render frr.conf from frr_config_template variable --- 0.15s 2026-03-11 00:45:20.529319 | orchestrator | osism.services.frr : Check for frr.conf file in the configuration repository --- 0.14s 2026-03-11 00:45:20.529324 | orchestrator | osism.services.frr : Remove temporary frr_config_template file ---------- 0.14s 2026-03-11 00:45:20.834686 | orchestrator | 2026-03-11 00:45:20.837160 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Wed Mar 11 00:45:20 UTC 2026 2026-03-11 00:45:20.837223 | orchestrator | 2026-03-11 00:45:22.829589 | orchestrator | 2026-03-11 00:45:22 | INFO  | Collection nutshell is prepared for execution 2026-03-11 00:45:22.829699 | orchestrator | 2026-03-11 00:45:22 | INFO  | A [0] - dotfiles 2026-03-11 00:45:32.896204 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [0] - homer 2026-03-11 00:45:32.896293 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [0] - netdata 2026-03-11 00:45:32.896305 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [0] - openstackclient 2026-03-11 00:45:32.896414 | orchestrator | 2026-03-11 
00:45:32 | INFO  | A [0] - phpmyadmin 2026-03-11 00:45:32.896426 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [0] - common 2026-03-11 00:45:32.900823 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [1] -- loadbalancer 2026-03-11 00:45:32.901078 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [2] --- opensearch 2026-03-11 00:45:32.901100 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [2] --- mariadb-ng 2026-03-11 00:45:32.901107 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [3] ---- horizon 2026-03-11 00:45:32.901263 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [3] ---- keystone 2026-03-11 00:45:32.901527 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [4] ----- neutron 2026-03-11 00:45:32.901771 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [5] ------ wait-for-nova 2026-03-11 00:45:32.902056 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [6] ------- octavia 2026-03-11 00:45:32.904112 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [4] ----- barbican 2026-03-11 00:45:32.904216 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [4] ----- designate 2026-03-11 00:45:32.904232 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [4] ----- ironic 2026-03-11 00:45:32.904387 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [4] ----- placement 2026-03-11 00:45:32.904463 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [4] ----- magnum 2026-03-11 00:45:32.905120 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [1] -- openvswitch 2026-03-11 00:45:32.905370 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [2] --- ovn 2026-03-11 00:45:32.905680 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [1] -- memcached 2026-03-11 00:45:32.905894 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [1] -- redis 2026-03-11 00:45:32.906132 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [1] -- rabbitmq-ng 2026-03-11 00:45:32.906484 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [0] - kubernetes 2026-03-11 00:45:32.908811 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [1] -- 
kubeconfig 2026-03-11 00:45:32.908971 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [1] -- copy-kubeconfig 2026-03-11 00:45:32.909352 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [0] - ceph 2026-03-11 00:45:32.911714 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [1] -- ceph-pools 2026-03-11 00:45:32.911747 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [2] --- copy-ceph-keys 2026-03-11 00:45:32.911757 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [3] ---- cephclient 2026-03-11 00:45:32.911903 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [4] ----- ceph-bootstrap-dashboard 2026-03-11 00:45:32.912012 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [4] ----- wait-for-keystone 2026-03-11 00:45:32.912343 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [5] ------ kolla-ceph-rgw 2026-03-11 00:45:32.912510 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [5] ------ glance 2026-03-11 00:45:32.912528 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [5] ------ cinder 2026-03-11 00:45:32.912676 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [5] ------ nova 2026-03-11 00:45:32.912923 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [4] ----- prometheus 2026-03-11 00:45:32.913178 | orchestrator | 2026-03-11 00:45:32 | INFO  | A [5] ------ grafana 2026-03-11 00:45:33.113393 | orchestrator | 2026-03-11 00:45:33 | INFO  | All tasks of the collection nutshell are prepared for execution 2026-03-11 00:45:33.113472 | orchestrator | 2026-03-11 00:45:33 | INFO  | Tasks are running in the background 2026-03-11 00:45:35.797242 | orchestrator | 2026-03-11 00:45:35 | INFO  | No task IDs specified, wait for all currently running tasks 2026-03-11 00:45:37.916512 | orchestrator | 2026-03-11 00:45:37 | INFO  | Task da7dba15-de16-4225-8bb5-4dc9c49bdfca is in state STARTED 2026-03-11 00:45:37.916668 | orchestrator | 2026-03-11 00:45:37 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:45:37.918279 | orchestrator | 2026-03-11 00:45:37 | INFO 
| Task 89574f1b-1cc4-45b6-be22-2f4d59e3770c is in state STARTED
2026-03-11 00:45:37.918810 | orchestrator | 2026-03-11 00:45:37 | INFO  | Task 855256a2-ebfc-49fd-888f-fa646fb84c69 is in state STARTED
2026-03-11 00:45:37.920570 | orchestrator | 2026-03-11 00:45:37 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED
2026-03-11 00:45:37.921174 | orchestrator | 2026-03-11 00:45:37 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:45:37.922607 | orchestrator | 2026-03-11 00:45:37 | INFO  | Task 2c548a53-4457-4563-bd68-230537635923 is in state STARTED
2026-03-11 00:45:37.922659 | orchestrator | 2026-03-11 00:45:37 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:45:59.769352 | orchestrator | 
2026-03-11 00:45:59.769407 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] *****************************************
2026-03-11 00:45:59.769413 | orchestrator | 
2026-03-11 00:45:59.769417 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.]
****
2026-03-11 00:45:59.769422 | orchestrator | Wednesday 11 March 2026 00:45:45 +0000 (0:00:00.254) 0:00:00.254 *******
2026-03-11 00:45:59.769426 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:45:59.769431 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:45:59.769434 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:45:59.769438 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:45:59.769442 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:45:59.769446 | orchestrator | changed: [testbed-manager]
2026-03-11 00:45:59.769449 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:45:59.769453 | orchestrator | 
2026-03-11 00:45:59.769457 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ********
2026-03-11 00:45:59.769461 | orchestrator | Wednesday 11 March 2026 00:45:48 +0000 (0:00:03.200) 0:00:03.455 *******
2026-03-11 00:45:59.769475 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-11 00:45:59.769479 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-11 00:45:59.769486 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-11 00:45:59.769490 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-11 00:45:59.769494 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-11 00:45:59.769497 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-11 00:45:59.769501 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-11 00:45:59.769505 | orchestrator | 
2026-03-11 00:45:59.769509 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] ***
2026-03-11 00:45:59.769513 | orchestrator | Wednesday 11 March 2026 00:45:50 +0000 (0:00:02.002) 0:00:05.457 *******
2026-03-11 00:45:59.769518 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2026-03-11 00:45:49.360327', 'end': '2026-03-11 00:45:49.363809', 'delta': '0:00:00.003482', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}])
2026-03-11 00:45:59.769526 | orchestrator | ok: [testbed-node-1] => (item=[0, ...])
2026-03-11 00:45:59.769530 | orchestrator | ok: [testbed-node-0] => (item=[0, ...])
2026-03-11 00:45:59.769543 | orchestrator | ok: [testbed-node-2] => (item=[0, ...])
2026-03-11 00:45:59.769555 | orchestrator | ok: [testbed-node-3] => (item=[0, ...])
2026-03-11 00:45:59.769559 | orchestrator | ok: [testbed-node-4] => (item=[0, ...])
2026-03-11 00:45:59.769563 | orchestrator | ok: [testbed-node-5] => (item=[0, ...])
2026-03-11 00:45:59.769567 | orchestrator | 
2026-03-11 00:45:59.769571 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] ****
2026-03-11 00:45:59.769575 | orchestrator | Wednesday 11 March 2026 00:45:52 +0000 (0:00:01.631) 0:00:07.088 *******
2026-03-11 00:45:59.769579 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf)
2026-03-11 00:45:59.769582 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf)
2026-03-11 00:45:59.769586 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf)
2026-03-11 00:45:59.769590 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf)
2026-03-11 00:45:59.769594 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf)
2026-03-11 00:45:59.769597 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf)
2026-03-11 00:45:59.769601 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf)
2026-03-11 00:45:59.769605 | orchestrator | 
2026-03-11 00:45:59.769609 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.]
******************
2026-03-11 00:45:59.769612 | orchestrator | Wednesday 11 March 2026 00:45:53 +0000 (0:00:01.642) 0:00:08.731 *******
2026-03-11 00:45:59.769616 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf)
2026-03-11 00:45:59.769620 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf)
2026-03-11 00:45:59.769624 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf)
2026-03-11 00:45:59.769630 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf)
2026-03-11 00:45:59.769634 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf)
2026-03-11 00:45:59.769640 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf)
2026-03-11 00:45:59.769647 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf)
2026-03-11 00:45:59.769651 | orchestrator | 
2026-03-11 00:45:59.769655 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:45:59.769662 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:45:59.769667 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:45:59.769670 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:45:59.769689 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:45:59.769693 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:45:59.769697 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:45:59.769701 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:45:59.769707 | orchestrator | 
2026-03-11 00:45:59.769713 | orchestrator | 
2026-03-11 00:45:59.769719 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:45:59.769726 | orchestrator | Wednesday 11 March 2026 00:45:56 +0000 (0:00:03.073) 0:00:11.805 *******
2026-03-11 00:45:59.769733 | orchestrator | ===============================================================================
2026-03-11 00:45:59.769740 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.20s
2026-03-11 00:45:59.769747 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.07s
2026-03-11 00:45:59.769752 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.00s
2026-03-11 00:45:59.769756 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 1.64s
2026-03-11 00:45:59.769760 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.63s
2026-03-11 00:45:59.769764 | orchestrator | 2026-03-11 00:45:59 | INFO  | Task da7dba15-de16-4225-8bb5-4dc9c49bdfca is in state SUCCESS
2026-03-11 00:45:59.769768 | orchestrator | 2026-03-11 00:45:59 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED
2026-03-11 00:45:59.769772 | orchestrator | 2026-03-11 00:45:59 | INFO  | Task 89574f1b-1cc4-45b6-be22-2f4d59e3770c is in state STARTED
2026-03-11 00:45:59.769776 | orchestrator | 2026-03-11 00:45:59 | INFO  | Task 855256a2-ebfc-49fd-888f-fa646fb84c69 is in state STARTED
2026-03-11 00:45:59.769780 | orchestrator | 2026-03-11 00:45:59 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED
2026-03-11 00:45:59.769786 | orchestrator | 2026-03-11 00:45:59 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:45:59.769954 | orchestrator | 2026-03-11 00:45:59 | INFO  | Task 4096fd3c-f917-491b-adb1-5cd692bf3b6d is in state STARTED
2026-03-11 00:45:59.769963 | orchestrator | 2026-03-11 00:45:59 | INFO  | Task
2c548a53-4457-4563-bd68-230537635923 is in state STARTED
2026-03-11 00:45:59.769968 | orchestrator | 2026-03-11 00:45:59 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:46:24.858293 | orchestrator | 2026-03-11 00:46:24 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED
2026-03-11 00:46:24.874229 | orchestrator | 2026-03-11 00:46:24 | INFO  | Task 89574f1b-1cc4-45b6-be22-2f4d59e3770c is in state SUCCESS
2026-03-11 00:46:24.892249 | orchestrator | 2026-03-11 00:46:24 | INFO  | Task 855256a2-ebfc-49fd-888f-fa646fb84c69 is in state STARTED
2026-03-11 00:46:24.916360 | orchestrator | 2026-03-11 00:46:24 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED
2026-03-11 00:46:24.938255 | orchestrator | 2026-03-11 00:46:24 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:46:24.938330 | orchestrator | 2026-03-11 00:46:24 | INFO  | Task 4096fd3c-f917-491b-adb1-5cd692bf3b6d is in state STARTED
2026-03-11 00:46:24.938337 | orchestrator | 2026-03-11 00:46:24 | INFO  | Task 2c548a53-4457-4563-bd68-230537635923 is in state STARTED
2026-03-11 00:46:24.938343 | orchestrator | 2026-03-11 00:46:24 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:46:34.150954 | orchestrator | 2026-03-11 00:46:34 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED
2026-03-11 00:46:34.151016 | orchestrator | 2026-03-11 00:46:34 | INFO  | Task 855256a2-ebfc-49fd-888f-fa646fb84c69 is in state SUCCESS
2026-03-11 00:46:34.152148 | orchestrator | 2026-03-11 00:46:34 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED
2026-03-11 00:46:34.153846 | orchestrator | 2026-03-11 00:46:34 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:46:34.154838 | orchestrator | 2026-03-11 00:46:34 | INFO  | Task 4096fd3c-f917-491b-adb1-5cd692bf3b6d is in state STARTED
2026-03-11 00:46:34.156101 | orchestrator | 2026-03-11 00:46:34 | INFO  | Task 2c548a53-4457-4563-bd68-230537635923 is in state STARTED
2026-03-11 00:46:34.156145 | orchestrator | 2026-03-11 00:46:34 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:47:01.660827 | orchestrator | 2026-03-11 00:47:01 | INFO  | Task
ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED
2026-03-11 00:47:01.662962 | orchestrator | 2026-03-11 00:47:01 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED
2026-03-11 00:47:01.663015 | orchestrator | 2026-03-11 00:47:01 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:47:01.663787 | orchestrator | 2026-03-11 00:47:01 | INFO  | Task 4096fd3c-f917-491b-adb1-5cd692bf3b6d is in state STARTED
2026-03-11 00:47:01.664838 | orchestrator | 2026-03-11 00:47:01 | INFO  | Task 2c548a53-4457-4563-bd68-230537635923 is in state STARTED
2026-03-11 00:47:01.664882 | orchestrator | 2026-03-11 00:47:01 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:47:04.694347 | orchestrator | 2026-03-11 00:47:04 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED
2026-03-11 00:47:04.695244 | orchestrator | 2026-03-11 00:47:04 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED
2026-03-11 00:47:04.695313 | orchestrator | 2026-03-11 00:47:04 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:47:04.695521 | orchestrator | 2026-03-11 00:47:04 | INFO  | Task 4096fd3c-f917-491b-adb1-5cd692bf3b6d is in state SUCCESS
2026-03-11 00:47:04.695976 | orchestrator |
2026-03-11 00:47:04.695999 | orchestrator |
2026-03-11 00:47:04.696003 | orchestrator | PLAY [Apply role homer] ********************************************************
2026-03-11 00:47:04.696007 | orchestrator |
2026-03-11 00:47:04.696011 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2026-03-11 00:47:04.696014 | orchestrator | Wednesday 11 March 2026 00:45:44 +0000 (0:00:00.485) 0:00:00.485 *******
2026-03-11 00:47:04.696018 | orchestrator | ok: [testbed-manager] => {
2026-03-11 00:47:04.696030 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2026-03-11 00:47:04.696034 | orchestrator | }
2026-03-11 00:47:04.696038 | orchestrator |
2026-03-11 00:47:04.696041 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2026-03-11 00:47:04.696044 | orchestrator | Wednesday 11 March 2026 00:45:44 +0000 (0:00:00.390) 0:00:00.876 *******
2026-03-11 00:47:04.696048 | orchestrator | ok: [testbed-manager]
2026-03-11 00:47:04.696051 | orchestrator |
2026-03-11 00:47:04.696054 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2026-03-11 00:47:04.696057 | orchestrator | Wednesday 11 March 2026 00:45:46 +0000 (0:00:02.427) 0:00:03.303 *******
2026-03-11 00:47:04.696061 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2026-03-11 00:47:04.696064 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2026-03-11 00:47:04.696067 | orchestrator |
2026-03-11 00:47:04.696070 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2026-03-11 00:47:04.696074 | orchestrator | Wednesday 11 March 2026 00:45:48 +0000 (0:00:01.485) 0:00:04.788 *******
2026-03-11 00:47:04.696077 | orchestrator | changed: [testbed-manager]
2026-03-11 00:47:04.696080 | orchestrator |
2026-03-11 00:47:04.696084 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2026-03-11 00:47:04.696087 | orchestrator | Wednesday 11 March 2026 00:45:51 +0000 (0:00:02.673) 0:00:07.462 *******
2026-03-11 00:47:04.696090 | orchestrator | changed: [testbed-manager]
2026-03-11 00:47:04.696093 | orchestrator |
2026-03-11 00:47:04.696096 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2026-03-11 00:47:04.696099 | orchestrator | Wednesday 11 March 2026 00:45:52 +0000 (0:00:01.632) 0:00:09.094 *******
2026-03-11 00:47:04.696103 | orchestrator | FAILED - RETRYING:
[testbed-manager]: Manage homer service (10 retries left).
2026-03-11 00:47:04.696106 | orchestrator | ok: [testbed-manager]
2026-03-11 00:47:04.696110 | orchestrator |
2026-03-11 00:47:04.696113 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2026-03-11 00:47:04.696116 | orchestrator | Wednesday 11 March 2026 00:46:19 +0000 (0:00:26.682) 0:00:35.777 *******
2026-03-11 00:47:04.696119 | orchestrator | changed: [testbed-manager]
2026-03-11 00:47:04.696122 | orchestrator |
2026-03-11 00:47:04.696125 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:47:04.696129 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:47:04.696133 | orchestrator |
2026-03-11 00:47:04.696136 | orchestrator |
2026-03-11 00:47:04.696139 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:47:04.696143 | orchestrator | Wednesday 11 March 2026 00:46:22 +0000 (0:00:02.928) 0:00:38.705 *******
2026-03-11 00:47:04.696146 | orchestrator | ===============================================================================
2026-03-11 00:47:04.696149 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 26.68s
2026-03-11 00:47:04.696152 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.93s
2026-03-11 00:47:04.696155 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.67s
2026-03-11 00:47:04.696174 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.43s
2026-03-11 00:47:04.696178 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.63s
2026-03-11 00:47:04.696181 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.49s
2026-03-11 00:47:04.696184 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.39s
2026-03-11 00:47:04.696187 | orchestrator |
2026-03-11 00:47:04.696191 | orchestrator |
2026-03-11 00:47:04.696194 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2026-03-11 00:47:04.696208 | orchestrator |
2026-03-11 00:47:04.696211 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2026-03-11 00:47:04.696214 | orchestrator | Wednesday 11 March 2026 00:45:44 +0000 (0:00:00.445) 0:00:00.445 *******
2026-03-11 00:47:04.696218 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2026-03-11 00:47:04.696222 | orchestrator |
2026-03-11 00:47:04.696225 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2026-03-11 00:47:04.696228 | orchestrator | Wednesday 11 March 2026 00:45:44 +0000 (0:00:00.296) 0:00:00.741 *******
2026-03-11 00:47:04.696231 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2026-03-11 00:47:04.696234 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2026-03-11 00:47:04.696238 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2026-03-11 00:47:04.696241 | orchestrator |
2026-03-11 00:47:04.696244 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2026-03-11 00:47:04.696247 | orchestrator | Wednesday 11 March 2026 00:45:47 +0000 (0:00:02.418) 0:00:03.160 *******
2026-03-11 00:47:04.696251 | orchestrator | changed: [testbed-manager]
2026-03-11 00:47:04.696254 | orchestrator |
2026-03-11 00:47:04.696257 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2026-03-11 00:47:04.696260 |
orchestrator | Wednesday 11 March 2026 00:45:49 +0000 (0:00:02.009) 0:00:05.169 *******
2026-03-11 00:47:04.696269 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2026-03-11 00:47:04.696274 | orchestrator | ok: [testbed-manager]
2026-03-11 00:47:04.696279 | orchestrator |
2026-03-11 00:47:04.696285 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2026-03-11 00:47:04.696290 | orchestrator | Wednesday 11 March 2026 00:46:26 +0000 (0:00:37.051) 0:00:42.220 *******
2026-03-11 00:47:04.696298 | orchestrator | changed: [testbed-manager]
2026-03-11 00:47:04.696304 | orchestrator |
2026-03-11 00:47:04.696309 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2026-03-11 00:47:04.696314 | orchestrator | Wednesday 11 March 2026 00:46:27 +0000 (0:00:01.721) 0:00:43.942 *******
2026-03-11 00:47:04.696319 | orchestrator | ok: [testbed-manager]
2026-03-11 00:47:04.696392 | orchestrator |
2026-03-11 00:47:04.696400 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2026-03-11 00:47:04.696406 | orchestrator | Wednesday 11 March 2026 00:46:28 +0000 (0:00:00.824) 0:00:44.766 *******
2026-03-11 00:47:04.696410 | orchestrator | changed: [testbed-manager]
2026-03-11 00:47:04.696423 | orchestrator |
2026-03-11 00:47:04.696433 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2026-03-11 00:47:04.696439 | orchestrator | Wednesday 11 March 2026 00:46:30 +0000 (0:00:01.838) 0:00:46.604 *******
2026-03-11 00:47:04.696444 | orchestrator | changed: [testbed-manager]
2026-03-11 00:47:04.696450 | orchestrator |
2026-03-11 00:47:04.696455 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2026-03-11 00:47:04.696460 | orchestrator | Wednesday 11 March 2026 00:46:31 +0000 (0:00:00.786) 0:00:47.391 *******
2026-03-11 00:47:04.696466 | orchestrator | changed: [testbed-manager]
2026-03-11 00:47:04.696472 | orchestrator |
2026-03-11 00:47:04.696477 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2026-03-11 00:47:04.696489 | orchestrator | Wednesday 11 March 2026 00:46:31 +0000 (0:00:00.601) 0:00:47.993 *******
2026-03-11 00:47:04.696492 | orchestrator | ok: [testbed-manager]
2026-03-11 00:47:04.696496 | orchestrator |
2026-03-11 00:47:04.696499 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:47:04.696502 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:47:04.696507 | orchestrator |
2026-03-11 00:47:04.696513 | orchestrator |
2026-03-11 00:47:04.696518 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:47:04.696524 | orchestrator | Wednesday 11 March 2026 00:46:33 +0000 (0:00:01.171) 0:00:49.164 *******
2026-03-11 00:47:04.696529 | orchestrator | ===============================================================================
2026-03-11 00:47:04.696534 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 37.05s
2026-03-11 00:47:04.696540 | orchestrator | osism.services.openstackclient : Create required directories ------------ 2.42s
2026-03-11 00:47:04.696545 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 2.01s
2026-03-11 00:47:04.696550 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 1.84s
2026-03-11 00:47:04.696555 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.72s
2026-03-11 00:47:04.696561 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 1.17s
2026-03-11 00:47:04.696566 | orchestrator |
osism.services.openstackclient : Remove ospurge wrapper script ---------- 0.82s
2026-03-11 00:47:04.696571 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 0.79s
2026-03-11 00:47:04.696602 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.60s
2026-03-11 00:47:04.696609 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.30s
2026-03-11 00:47:04.696614 | orchestrator |
2026-03-11 00:47:04.696619 | orchestrator |
2026-03-11 00:47:04.696625 | orchestrator | PLAY [Apply role phpmyadmin] ***************************************************
2026-03-11 00:47:04.696630 | orchestrator |
2026-03-11 00:47:04.696636 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] *************
2026-03-11 00:47:04.696642 | orchestrator | Wednesday 11 March 2026 00:46:02 +0000 (0:00:00.255) 0:00:00.255 *******
2026-03-11 00:47:04.696647 | orchestrator | ok: [testbed-manager]
2026-03-11 00:47:04.696652 | orchestrator |
2026-03-11 00:47:04.696658 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] *****************
2026-03-11 00:47:04.696663 | orchestrator | Wednesday 11 March 2026 00:46:05 +0000 (0:00:02.944) 0:00:03.200 *******
2026-03-11 00:47:04.696668 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin)
2026-03-11 00:47:04.696674 | orchestrator |
2026-03-11 00:47:04.696677 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] ****************
2026-03-11 00:47:04.696681 | orchestrator | Wednesday 11 March 2026 00:46:06 +0000 (0:00:00.630) 0:00:03.830 *******
2026-03-11 00:47:04.696684 | orchestrator | changed: [testbed-manager]
2026-03-11 00:47:04.696689 | orchestrator |
2026-03-11 00:47:04.696695 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] *******************
2026-03-11 00:47:04.696700 | orchestrator | Wednesday 11 March 2026 00:46:08 +0000 (0:00:01.507) 0:00:05.337 *******
2026-03-11 00:47:04.696706 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left).
2026-03-11 00:47:04.696712 | orchestrator | ok: [testbed-manager]
2026-03-11 00:47:04.696717 | orchestrator |
2026-03-11 00:47:04.696723 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] *******
2026-03-11 00:47:04.696728 | orchestrator | Wednesday 11 March 2026 00:46:59 +0000 (0:00:51.482) 0:00:56.820 *******
2026-03-11 00:47:04.696733 | orchestrator | changed: [testbed-manager]
2026-03-11 00:47:04.696739 | orchestrator |
2026-03-11 00:47:04.696744 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:47:04.696749 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:47:04.696758 | orchestrator |
2026-03-11 00:47:04.696813 | orchestrator |
2026-03-11 00:47:04.696820 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:47:04.696832 | orchestrator | Wednesday 11 March 2026 00:47:03 +0000 (0:00:03.801) 0:01:00.621 *******
2026-03-11 00:47:04.696838 | orchestrator | ===============================================================================
2026-03-11 00:47:04.696843 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 51.48s
2026-03-11 00:47:04.696848 | orchestrator | osism.services.phpmyadmin : Restart phpmyadmin service ------------------ 3.80s
2026-03-11 00:47:04.696857 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 2.94s
2026-03-11 00:47:04.696862 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.51s
2026-03-11 00:47:04.696868 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.63s
2026-03-11
00:47:04.696873 | orchestrator | 2026-03-11 00:47:04 | INFO  | Task 2c548a53-4457-4563-bd68-230537635923 is in state STARTED
2026-03-11 00:47:04.696878 | orchestrator | 2026-03-11 00:47:04 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:47:13.834000 | orchestrator | 2026-03-11 00:47:13 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED
2026-03-11 00:47:13.835628 | orchestrator | 2026-03-11 00:47:13 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED
2026-03-11 00:47:13.837737 | orchestrator | 2026-03-11 00:47:13 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:47:13.839651 | orchestrator
| 2026-03-11 00:47:13 | INFO  | Task 2c548a53-4457-4563-bd68-230537635923 is in state STARTED
2026-03-11 00:47:13.839694 | orchestrator | 2026-03-11 00:47:13 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:47:16.872624 | orchestrator | 2026-03-11 00:47:16 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED
2026-03-11 00:47:16.875183 | orchestrator | 2026-03-11 00:47:16 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED
2026-03-11 00:47:16.878678 | orchestrator |
2026-03-11 00:47:16.878734 | orchestrator | 2026-03-11 00:47:16 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:47:16.878740 | orchestrator | 2026-03-11 00:47:16 | INFO  | Task 2c548a53-4457-4563-bd68-230537635923 is in state SUCCESS
2026-03-11 00:47:16.879437 | orchestrator |
2026-03-11 00:47:16.879466 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-11 00:47:16.879473 | orchestrator |
2026-03-11 00:47:16.879480 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 00:47:16.879486 | orchestrator | Wednesday 11 March 2026 00:45:44 +0000 (0:00:00.761) 0:00:00.761 *******
2026-03-11 00:47:16.879494 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2026-03-11 00:47:16.879501 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2026-03-11 00:47:16.879507 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2026-03-11 00:47:16.879514 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2026-03-11 00:47:16.879520 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2026-03-11 00:47:16.879527 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2026-03-11 00:47:16.879533 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2026-03-11 00:47:16.879539 | orchestrator |
2026-03-11 00:47:16.879546 | orchestrator | PLAY [Apply role netdata] ******************************************************
2026-03-11 00:47:16.879552 | orchestrator |
2026-03-11 00:47:16.879559 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2026-03-11 00:47:16.879566 | orchestrator | Wednesday 11 March 2026 00:45:46 +0000 (0:00:01.348) 0:00:02.110 *******
2026-03-11 00:47:16.879589 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-2, testbed-node-1, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:47:16.879601 | orchestrator |
2026-03-11 00:47:16.879608 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2026-03-11 00:47:16.879614 | orchestrator | Wednesday 11 March 2026 00:45:47 +0000 (0:00:01.804) 0:00:03.914 *******
2026-03-11 00:47:16.879621 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:47:16.879629 | orchestrator | ok: [testbed-manager]
2026-03-11 00:47:16.879642 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:47:16.879648 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:47:16.879655 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:47:16.879662 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:47:16.879668 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:47:16.879674 | orchestrator |
2026-03-11 00:47:16.879681 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2026-03-11 00:47:16.879688 | orchestrator | Wednesday 11 March 2026 00:45:49 +0000 (0:00:02.062) 0:00:05.977 *******
2026-03-11 00:47:16.879694 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:47:16.879701 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:47:16.879708 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:47:16.879714 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:47:16.879720 | orchestrator | ok: [testbed-manager]
2026-03-11 00:47:16.879727 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:47:16.879734 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:47:16.879740 | orchestrator |
2026-03-11 00:47:16.879747 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2026-03-11 00:47:16.879809 | orchestrator | Wednesday 11 March 2026 00:45:53 +0000 (0:00:03.754) 0:00:09.732 *******
2026-03-11 00:47:16.879815 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:47:16.879822 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:47:16.879828 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:47:16.879835 | orchestrator | changed: [testbed-manager]
2026-03-11 00:47:16.879842 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:47:16.879848 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:47:16.879855 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:47:16.879861 | orchestrator |
2026-03-11 00:47:16.879868 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2026-03-11 00:47:16.879875 | orchestrator | Wednesday 11 March 2026 00:45:56 +0000 (0:00:03.234) 0:00:12.966 *******
2026-03-11 00:47:16.879892 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:47:16.879898 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:47:16.879905 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:47:16.879911 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:47:16.879917 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:47:16.879922 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:47:16.879929 | orchestrator | changed: [testbed-manager]
2026-03-11 00:47:16.879935 | orchestrator |
2026-03-11 00:47:16.879942 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2026-03-11 00:47:16.879949 | orchestrator | Wednesday 11 March 2026 00:46:09 +0000
(0:00:13.044) 0:00:26.010 *******
2026-03-11 00:47:16.879955 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:47:16.879961 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:47:16.879968 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:47:16.879975 | orchestrator | changed: [testbed-manager]
2026-03-11 00:47:16.879981 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:47:16.879988 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:47:16.879994 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:47:16.880001 | orchestrator |
2026-03-11 00:47:16.880007 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2026-03-11 00:47:16.880014 | orchestrator | Wednesday 11 March 2026 00:46:47 +0000 (0:00:37.708) 0:01:03.718 *******
2026-03-11 00:47:16.880021 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:47:16.880030 | orchestrator |
2026-03-11 00:47:16.880036 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2026-03-11 00:47:16.880043 | orchestrator | Wednesday 11 March 2026 00:46:48 +0000 (0:00:01.346) 0:01:05.065 *******
2026-03-11 00:47:16.880049 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2026-03-11 00:47:16.880057 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2026-03-11 00:47:16.880064 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2026-03-11 00:47:16.880071 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2026-03-11 00:47:16.880089 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2026-03-11 00:47:16.880096 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2026-03-11 00:47:16.880103 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2026-03-11 00:47:16.880110 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2026-03-11 00:47:16.880117 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2026-03-11 00:47:16.880123 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2026-03-11 00:47:16.880130 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2026-03-11 00:47:16.880136 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2026-03-11 00:47:16.880142 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2026-03-11 00:47:16.880149 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2026-03-11 00:47:16.880155 | orchestrator |
2026-03-11 00:47:16.880161 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2026-03-11 00:47:16.880169 | orchestrator | Wednesday 11 March 2026 00:46:54 +0000 (0:00:05.590) 0:01:10.655 *******
2026-03-11 00:47:16.880174 | orchestrator | ok: [testbed-manager]
2026-03-11 00:47:16.880181 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:47:16.880187 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:47:16.880193 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:47:16.880199 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:47:16.880205 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:47:16.880211 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:47:16.880217 | orchestrator |
2026-03-11 00:47:16.880223 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2026-03-11 00:47:16.880229 | orchestrator | Wednesday 11 March 2026 00:46:55 +0000 (0:00:01.428) 0:01:12.084 *******
2026-03-11 00:47:16.880243 | orchestrator | changed: [testbed-manager]
2026-03-11 00:47:16.880249 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:47:16.880255 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:47:16.880262 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:47:16.880268 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:47:16.880274 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:47:16.880280 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:47:16.880288 | orchestrator |
2026-03-11 00:47:16.880295 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2026-03-11 00:47:16.880325 | orchestrator | Wednesday 11 March 2026 00:46:57 +0000 (0:00:01.436) 0:01:13.521 *******
2026-03-11 00:47:16.880332 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:47:16.880338 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:47:16.880344 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:47:16.880350 | orchestrator | ok: [testbed-manager]
2026-03-11 00:47:16.880356 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:47:16.880362 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:47:16.880367 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:47:16.880373 | orchestrator |
2026-03-11 00:47:16.880379 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2026-03-11 00:47:16.880385 | orchestrator | Wednesday 11 March 2026 00:46:59 +0000 (0:00:02.081) 0:01:15.230 *******
2026-03-11 00:47:16.880391 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:47:16.880397 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:47:16.880403 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:47:16.880409 | orchestrator | ok: [testbed-manager]
2026-03-11 00:47:16.880415 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:47:16.880433 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:47:16.880440 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:47:16.880446 | orchestrator |
2026-03-11 00:47:16.880453 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2026-03-11 00:47:16.880460 | orchestrator | Wednesday 11 March 2026 00:47:01 +0000 (0:00:02.081) 0:01:17.313 *******
2026-03-11 00:47:16.880466 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2026-03-11 00:47:16.880475 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:47:16.880482 | orchestrator |
2026-03-11 00:47:16.880488 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2026-03-11 00:47:16.880494 | orchestrator | Wednesday 11 March 2026 00:47:02 +0000 (0:00:01.280) 0:01:18.593 *******
2026-03-11 00:47:16.880500 | orchestrator | changed: [testbed-manager]
2026-03-11 00:47:16.880506 | orchestrator |
2026-03-11 00:47:16.880512 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2026-03-11 00:47:16.880517 | orchestrator | Wednesday 11 March 2026 00:47:05 +0000 (0:00:02.729) 0:01:21.322 *******
2026-03-11 00:47:16.880523 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:47:16.880530 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:47:16.880535 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:47:16.880541 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:47:16.880547 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:47:16.880553 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:47:16.880559 | orchestrator | changed: [testbed-manager]
2026-03-11 00:47:16.880565 | orchestrator |
2026-03-11 00:47:16.880571 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:47:16.880577 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:47:16.880585 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:47:16.880591 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:47:16.880605 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:47:16.880621 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:47:16.880627 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:47:16.880633 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 00:47:16.880640 | orchestrator |
2026-03-11 00:47:16.880646 | orchestrator |
2026-03-11 00:47:16.880652 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:47:16.880658 | orchestrator | Wednesday 11 March 2026 00:47:16 +0000 (0:00:10.928) 0:01:32.251 *******
2026-03-11 00:47:16.880664 | orchestrator | ===============================================================================
2026-03-11 00:47:16.880669 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 37.71s
2026-03-11 00:47:16.880675 | orchestrator | osism.services.netdata : Add repository -------------------------------- 13.04s
2026-03-11 00:47:16.880681 | orchestrator | osism.services.netdata : Restart service netdata ----------------------- 10.93s
2026-03-11 00:47:16.880687 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 5.59s
2026-03-11 00:47:16.880693 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 3.76s
2026-03-11 00:47:16.880698 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 3.23s
2026-03-11 00:47:16.880704 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.73s
2026-03-11 00:47:16.880711 |
orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.08s 2026-03-11 00:47:16.880717 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.06s 2026-03-11 00:47:16.880724 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.80s 2026-03-11 00:47:16.880736 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.71s 2026-03-11 00:47:16.880742 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.44s 2026-03-11 00:47:16.880784 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.43s 2026-03-11 00:47:16.880789 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.35s 2026-03-11 00:47:16.880793 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.35s 2026-03-11 00:47:16.880797 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.28s 2026-03-11 00:47:16.880801 | orchestrator | 2026-03-11 00:47:16 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:19.932847 | orchestrator | 2026-03-11 00:47:19 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:47:19.935286 | orchestrator | 2026-03-11 00:47:19 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED 2026-03-11 00:47:19.936885 | orchestrator | 2026-03-11 00:47:19 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:47:19.936950 | orchestrator | 2026-03-11 00:47:19 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:22.977475 | orchestrator | 2026-03-11 00:47:22 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:47:22.977526 | orchestrator | 2026-03-11 00:47:22 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in 
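The repeated `Task … is in state STARTED` / `Wait 1 second(s) until the next check` lines that follow come from the OSISM task watcher re-querying each running task between short sleeps. A minimal sketch of such a polling loop — hypothetical names throughout; `get_state` stands in for whatever result-backend query the real watcher performs:

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll task states until every task leaves the STARTED/PENDING states.

    `get_state` is a stand-in callable (task_id -> state string); the real
    OSISM watcher queries its Celery-style result backend instead.
    """
    pending = set(task_ids)
    results = {}
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state not in ("PENDING", "STARTED"):
                # Terminal state (e.g. SUCCESS): stop polling this task.
                results[task_id] = state
        pending -= set(results)
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return results
```

With this shape, a task that reaches SUCCESS (as 650ecef9-9f63-4d99-aaf4-03b4acd35cbd does at 00:48:11 below) drops out of the polling set while the remaining tasks keep being re-checked each round.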
state STARTED 2026-03-11 00:47:22.977835 | orchestrator | 2026-03-11 00:47:22 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:47:22.977890 | orchestrator | 2026-03-11 00:47:22 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:26.031383 | orchestrator | 2026-03-11 00:47:26 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:47:26.032216 | orchestrator | 2026-03-11 00:47:26 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED 2026-03-11 00:47:26.036189 | orchestrator | 2026-03-11 00:47:26 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:47:26.036237 | orchestrator | 2026-03-11 00:47:26 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:29.093937 | orchestrator | 2026-03-11 00:47:29 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:47:29.095470 | orchestrator | 2026-03-11 00:47:29 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED 2026-03-11 00:47:29.097521 | orchestrator | 2026-03-11 00:47:29 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:47:29.097558 | orchestrator | 2026-03-11 00:47:29 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:32.156848 | orchestrator | 2026-03-11 00:47:32 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:47:32.158556 | orchestrator | 2026-03-11 00:47:32 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED 2026-03-11 00:47:32.159645 | orchestrator | 2026-03-11 00:47:32 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:47:32.159688 | orchestrator | 2026-03-11 00:47:32 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:35.213310 | orchestrator | 2026-03-11 00:47:35 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:47:35.215011 | orchestrator 
| 2026-03-11 00:47:35 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED 2026-03-11 00:47:35.215064 | orchestrator | 2026-03-11 00:47:35 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:47:35.215073 | orchestrator | 2026-03-11 00:47:35 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:38.257102 | orchestrator | 2026-03-11 00:47:38 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:47:38.258543 | orchestrator | 2026-03-11 00:47:38 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED 2026-03-11 00:47:38.260393 | orchestrator | 2026-03-11 00:47:38 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:47:38.260452 | orchestrator | 2026-03-11 00:47:38 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:41.306067 | orchestrator | 2026-03-11 00:47:41 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:47:41.306143 | orchestrator | 2026-03-11 00:47:41 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED 2026-03-11 00:47:41.308651 | orchestrator | 2026-03-11 00:47:41 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:47:41.308719 | orchestrator | 2026-03-11 00:47:41 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:44.352972 | orchestrator | 2026-03-11 00:47:44 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:47:44.354213 | orchestrator | 2026-03-11 00:47:44 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED 2026-03-11 00:47:44.355125 | orchestrator | 2026-03-11 00:47:44 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:47:44.355159 | orchestrator | 2026-03-11 00:47:44 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:47.396190 | orchestrator | 2026-03-11 00:47:47 | INFO  | Task 
ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:47:47.397819 | orchestrator | 2026-03-11 00:47:47 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED 2026-03-11 00:47:47.401069 | orchestrator | 2026-03-11 00:47:47 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:47:47.401126 | orchestrator | 2026-03-11 00:47:47 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:50.444276 | orchestrator | 2026-03-11 00:47:50 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:47:50.450622 | orchestrator | 2026-03-11 00:47:50 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED 2026-03-11 00:47:50.453706 | orchestrator | 2026-03-11 00:47:50 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:47:50.454222 | orchestrator | 2026-03-11 00:47:50 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:53.488828 | orchestrator | 2026-03-11 00:47:53 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:47:53.490517 | orchestrator | 2026-03-11 00:47:53 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED 2026-03-11 00:47:53.492643 | orchestrator | 2026-03-11 00:47:53 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:47:53.492698 | orchestrator | 2026-03-11 00:47:53 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:47:56.526835 | orchestrator | 2026-03-11 00:47:56 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:47:56.527119 | orchestrator | 2026-03-11 00:47:56 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED 2026-03-11 00:47:56.528125 | orchestrator | 2026-03-11 00:47:56 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:47:56.528157 | orchestrator | 2026-03-11 00:47:56 | INFO  | Wait 1 second(s) until the next 
check 2026-03-11 00:47:59.583657 | orchestrator | 2026-03-11 00:47:59 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:47:59.586008 | orchestrator | 2026-03-11 00:47:59 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED 2026-03-11 00:47:59.589099 | orchestrator | 2026-03-11 00:47:59 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:47:59.589196 | orchestrator | 2026-03-11 00:47:59 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:02.625744 | orchestrator | 2026-03-11 00:48:02 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:02.627351 | orchestrator | 2026-03-11 00:48:02 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED 2026-03-11 00:48:02.628747 | orchestrator | 2026-03-11 00:48:02 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:48:02.628814 | orchestrator | 2026-03-11 00:48:02 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:05.665855 | orchestrator | 2026-03-11 00:48:05 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:05.667591 | orchestrator | 2026-03-11 00:48:05 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED 2026-03-11 00:48:05.668989 | orchestrator | 2026-03-11 00:48:05 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:48:05.669048 | orchestrator | 2026-03-11 00:48:05 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:08.707647 | orchestrator | 2026-03-11 00:48:08 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:08.708913 | orchestrator | 2026-03-11 00:48:08 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state STARTED 2026-03-11 00:48:08.708962 | orchestrator | 2026-03-11 00:48:08 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 
00:48:08.708971 | orchestrator | 2026-03-11 00:48:08 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:11.753046 | orchestrator | 2026-03-11 00:48:11 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:11.753135 | orchestrator | 2026-03-11 00:48:11 | INFO  | Task 981a7882-e1cc-4dd6-93e1-cf14c8585311 is in state STARTED 2026-03-11 00:48:11.756522 | orchestrator | 2026-03-11 00:48:11 | INFO  | Task 82ba40c0-9407-409b-84f0-bb2e113d26f8 is in state STARTED 2026-03-11 00:48:11.762847 | orchestrator | 2026-03-11 00:48:11 | INFO  | Task 650ecef9-9f63-4d99-aaf4-03b4acd35cbd is in state SUCCESS 2026-03-11 00:48:11.765202 | orchestrator | 2026-03-11 00:48:11.765260 | orchestrator | 2026-03-11 00:48:11.765269 | orchestrator | PLAY [Apply role common] ******************************************************* 2026-03-11 00:48:11.765277 | orchestrator | 2026-03-11 00:48:11.765284 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-11 00:48:11.765291 | orchestrator | Wednesday 11 March 2026 00:45:37 +0000 (0:00:00.316) 0:00:00.316 ******* 2026-03-11 00:48:11.765299 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:48:11.765306 | orchestrator | 2026-03-11 00:48:11.765313 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2026-03-11 00:48:11.765319 | orchestrator | Wednesday 11 March 2026 00:45:38 +0000 (0:00:01.072) 0:00:01.389 ******* 2026-03-11 00:48:11.765326 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-11 00:48:11.765332 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-11 00:48:11.765339 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-11 
00:48:11.765345 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-11 00:48:11.765352 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-11 00:48:11.765358 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-11 00:48:11.765364 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-11 00:48:11.765370 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-11 00:48:11.765377 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-11 00:48:11.765384 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-11 00:48:11.765390 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-11 00:48:11.765396 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-11 00:48:11.765403 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-11 00:48:11.765409 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-11 00:48:11.765415 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2026-03-11 00:48:11.765421 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-11 00:48:11.765429 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-11 00:48:11.765435 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-11 00:48:11.765459 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-11 
00:48:11.765465 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2026-03-11 00:48:11.765472 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2026-03-11 00:48:11.765478 | orchestrator | 2026-03-11 00:48:11.765484 | orchestrator | TASK [common : include_tasks] ************************************************** 2026-03-11 00:48:11.765490 | orchestrator | Wednesday 11 March 2026 00:45:43 +0000 (0:00:04.507) 0:00:05.897 ******* 2026-03-11 00:48:11.765497 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:48:11.765504 | orchestrator | 2026-03-11 00:48:11.765510 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2026-03-11 00:48:11.765517 | orchestrator | Wednesday 11 March 2026 00:45:44 +0000 (0:00:01.161) 0:00:07.058 ******* 2026-03-11 00:48:11.765532 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.765547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.765585 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.765593 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.765600 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.765606 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.765618 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.765625 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.765634 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.765656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.765661 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.765666 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2026-03-11 00:48:11.765676 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.765686 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.765712 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.765719 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.765728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.765749 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.765756 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.765763 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': 
{'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.765769 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.765783 | orchestrator | 2026-03-11 00:48:11.765789 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2026-03-11 00:48:11.765795 | orchestrator | Wednesday 11 March 2026 00:45:49 +0000 (0:00:05.540) 0:00:12.599 ******* 2026-03-11 00:48:11.765802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-11 00:48:11.765810 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.765817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.765827 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:48:11.765840 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-11 00:48:11.765865 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.765872 | orchestrator | 
skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.765879 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:48:11.765886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-11 00:48:11.765898 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.765904 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.765911 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:48:11.765917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-11 00:48:11.765923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.765932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.765947 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 
'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-11 00:48:11.765955 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.765966 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.765972 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:48:11.765979 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:48:11.765985 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-11 00:48:11.765992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.765998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.766005 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:48:11.766070 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-11 
00:48:11.766090 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.766098 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.766111 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:48:11.766118 | orchestrator | 2026-03-11 00:48:11.766125 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ****** 2026-03-11 00:48:11.766132 | orchestrator | Wednesday 11 March 2026 00:45:52 +0000 (0:00:02.266) 0:00:14.866 ******* 2026-03-11 00:48:11.766137 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': 
{}}})  2026-03-11 00:48:11.766143 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.766150 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.766157 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:48:11.766164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-11 00:48:11.766171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 
'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.766180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.766187 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:48:11.766202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-11 00:48:11.766215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.766222 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.766229 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:48:11.766235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-11 00:48:11.766243 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.766250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 
'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.766257 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:48:11.766264 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-11 00:48:11.766275 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.766293 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.766299 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:48:11.766307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-11 00:48:11.766315 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.766322 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.766328 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:48:11.766335 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2026-03-11 00:48:11.766343 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.766352 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.766365 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:48:11.766372 | orchestrator | 2026-03-11 00:48:11.766379 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2026-03-11 00:48:11.766386 | orchestrator | Wednesday 11 March 2026 00:45:54 +0000 (0:00:02.854) 0:00:17.720 ******* 2026-03-11 00:48:11.766393 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:48:11.766399 | orchestrator | skipping: 
[testbed-node-0] 2026-03-11 00:48:11.766406 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:48:11.766412 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:48:11.766419 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:48:11.766429 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:48:11.766435 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:48:11.766441 | orchestrator | 2026-03-11 00:48:11.766446 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2026-03-11 00:48:11.766453 | orchestrator | Wednesday 11 March 2026 00:45:56 +0000 (0:00:01.144) 0:00:18.865 ******* 2026-03-11 00:48:11.766459 | orchestrator | skipping: [testbed-manager] 2026-03-11 00:48:11.766465 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:48:11.766472 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:48:11.766478 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:48:11.766484 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:48:11.766490 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:48:11.766496 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:48:11.766502 | orchestrator | 2026-03-11 00:48:11.766509 | orchestrator | TASK [common : Copying over config.json files for services] ******************** 2026-03-11 00:48:11.766516 | orchestrator | Wednesday 11 March 2026 00:45:57 +0000 (0:00:01.252) 0:00:20.117 ******* 2026-03-11 00:48:11.766523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 
2026-03-11 00:48:11.766531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.766540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.766548 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.766560 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': 
{'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.766571 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.766583 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.766590 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.766596 | orchestrator 
| changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.766602 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.766608 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.766615 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 
'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.766629 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.766637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.766648 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.766655 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.766665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.766672 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.766679 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.766741 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.766749 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.766755 | orchestrator | 2026-03-11 00:48:11.766763 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2026-03-11 00:48:11.766769 | orchestrator | Wednesday 11 March 2026 00:46:06 +0000 (0:00:09.292) 0:00:29.410 ******* 2026-03-11 00:48:11.766775 | orchestrator | [WARNING]: Skipped 2026-03-11 00:48:11.766789 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2026-03-11 00:48:11.766797 | orchestrator | to this access issue: 2026-03-11 00:48:11.766804 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2026-03-11 00:48:11.766811 | orchestrator | directory 2026-03-11 00:48:11.766817 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-11 00:48:11.766823 | orchestrator | 2026-03-11 00:48:11.766829 | orchestrator | TASK [common : Find custom fluentd 
filter config files] ************************ 2026-03-11 00:48:11.766836 | orchestrator | Wednesday 11 March 2026 00:46:08 +0000 (0:00:01.766) 0:00:31.176 ******* 2026-03-11 00:48:11.766842 | orchestrator | [WARNING]: Skipped 2026-03-11 00:48:11.766848 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 2026-03-11 00:48:11.766859 | orchestrator | to this access issue: 2026-03-11 00:48:11.766865 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2026-03-11 00:48:11.766870 | orchestrator | directory 2026-03-11 00:48:11.766876 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-11 00:48:11.766883 | orchestrator | 2026-03-11 00:48:11.766890 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2026-03-11 00:48:11.766896 | orchestrator | Wednesday 11 March 2026 00:46:09 +0000 (0:00:00.722) 0:00:31.899 ******* 2026-03-11 00:48:11.766903 | orchestrator | [WARNING]: Skipped 2026-03-11 00:48:11.766909 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2026-03-11 00:48:11.766915 | orchestrator | to this access issue: 2026-03-11 00:48:11.766922 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2026-03-11 00:48:11.766929 | orchestrator | directory 2026-03-11 00:48:11.766936 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-11 00:48:11.766942 | orchestrator | 2026-03-11 00:48:11.766950 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2026-03-11 00:48:11.766957 | orchestrator | Wednesday 11 March 2026 00:46:10 +0000 (0:00:01.022) 0:00:32.922 ******* 2026-03-11 00:48:11.766964 | orchestrator | [WARNING]: Skipped 2026-03-11 00:48:11.766971 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2026-03-11 00:48:11.766977 | orchestrator | to this access 
issue: 2026-03-11 00:48:11.766983 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2026-03-11 00:48:11.766989 | orchestrator | directory 2026-03-11 00:48:11.766995 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-11 00:48:11.767001 | orchestrator | 2026-03-11 00:48:11.767008 | orchestrator | TASK [common : Copying over fluentd.conf] ************************************** 2026-03-11 00:48:11.767019 | orchestrator | Wednesday 11 March 2026 00:46:10 +0000 (0:00:00.710) 0:00:33.633 ******* 2026-03-11 00:48:11.767025 | orchestrator | changed: [testbed-manager] 2026-03-11 00:48:11.767032 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:48:11.767038 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:48:11.767044 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:48:11.767049 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:48:11.767053 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:48:11.767056 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:48:11.767060 | orchestrator | 2026-03-11 00:48:11.767064 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2026-03-11 00:48:11.767068 | orchestrator | Wednesday 11 March 2026 00:46:16 +0000 (0:00:05.386) 0:00:39.019 ******* 2026-03-11 00:48:11.767071 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-11 00:48:11.767076 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-11 00:48:11.767080 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-11 00:48:11.767084 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-11 00:48:11.767087 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-11 00:48:11.767091 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-11 00:48:11.767095 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2026-03-11 00:48:11.767098 | orchestrator | 2026-03-11 00:48:11.767102 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2026-03-11 00:48:11.767106 | orchestrator | Wednesday 11 March 2026 00:46:19 +0000 (0:00:03.148) 0:00:42.167 ******* 2026-03-11 00:48:11.767109 | orchestrator | changed: [testbed-manager] 2026-03-11 00:48:11.767113 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:48:11.767117 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:48:11.767121 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:48:11.767124 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:48:11.767128 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:48:11.767131 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:48:11.767135 | orchestrator | 2026-03-11 00:48:11.767139 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2026-03-11 00:48:11.767143 | orchestrator | Wednesday 11 March 2026 00:46:22 +0000 (0:00:03.402) 0:00:45.570 ******* 2026-03-11 00:48:11.767150 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.767158 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.767162 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767169 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.767173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.767177 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.767181 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.767185 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.767192 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.767203 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.767210 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.767215 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767221 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767227 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767233 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.767240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.767249 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.767263 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:48:11.767279 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767287 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767293 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767299 | orchestrator | 2026-03-11 00:48:11.767304 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2026-03-11 00:48:11.767310 | orchestrator | Wednesday 11 March 2026 00:46:25 +0000 (0:00:03.145) 0:00:48.716 ******* 2026-03-11 00:48:11.767315 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-11 00:48:11.767321 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-11 00:48:11.767326 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-11 00:48:11.767332 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-11 00:48:11.767337 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-11 00:48:11.767343 | 
orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-11 00:48:11.767349 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2026-03-11 00:48:11.767355 | orchestrator | 2026-03-11 00:48:11.767360 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2026-03-11 00:48:11.767367 | orchestrator | Wednesday 11 March 2026 00:46:28 +0000 (0:00:02.975) 0:00:51.691 ******* 2026-03-11 00:48:11.767373 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-11 00:48:11.767379 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-11 00:48:11.767385 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-11 00:48:11.767391 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-11 00:48:11.767397 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-11 00:48:11.767402 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-11 00:48:11.767408 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2026-03-11 00:48:11.767418 | orchestrator | 2026-03-11 00:48:11.767423 | orchestrator | TASK [common : Check common containers] **************************************** 2026-03-11 00:48:11.767429 | orchestrator | Wednesday 11 March 2026 00:46:31 +0000 (0:00:02.851) 0:00:54.542 ******* 2026-03-11 00:48:11.767436 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.767448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.767453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.767462 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.767469 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.767475 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767482 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.767493 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2026-03-11 00:48:11.767503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767507 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767511 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767519 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767523 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767529 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767546 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 
00:48:11.767550 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767554 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767557 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:48:11.767561 | orchestrator | 2026-03-11 00:48:11.767565 | orchestrator | TASK [common : Creating log volume] ******************************************** 2026-03-11 00:48:11.767569 | orchestrator | Wednesday 11 March 2026 00:46:35 +0000 (0:00:03.361) 0:00:57.904 ******* 2026-03-11 00:48:11.767572 | orchestrator | changed: [testbed-manager] 2026-03-11 00:48:11.767576 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:48:11.767580 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:48:11.767584 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:48:11.767590 | orchestrator | changed: [testbed-node-3] 2026-03-11 
00:48:11.767594 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:48:11.767598 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:48:11.767601 | orchestrator | 2026-03-11 00:48:11.767605 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2026-03-11 00:48:11.767609 | orchestrator | Wednesday 11 March 2026 00:46:36 +0000 (0:00:01.507) 0:00:59.411 ******* 2026-03-11 00:48:11.767612 | orchestrator | changed: [testbed-manager] 2026-03-11 00:48:11.767616 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:48:11.767620 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:48:11.767623 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:48:11.767627 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:48:11.767631 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:48:11.767634 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:48:11.767638 | orchestrator | 2026-03-11 00:48:11.767642 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-11 00:48:11.767646 | orchestrator | Wednesday 11 March 2026 00:46:37 +0000 (0:00:01.209) 0:01:00.620 ******* 2026-03-11 00:48:11.767649 | orchestrator | 2026-03-11 00:48:11.767653 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-11 00:48:11.767657 | orchestrator | Wednesday 11 March 2026 00:46:37 +0000 (0:00:00.089) 0:01:00.710 ******* 2026-03-11 00:48:11.767660 | orchestrator | 2026-03-11 00:48:11.767664 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-11 00:48:11.767668 | orchestrator | Wednesday 11 March 2026 00:46:38 +0000 (0:00:00.067) 0:01:00.778 ******* 2026-03-11 00:48:11.767671 | orchestrator | 2026-03-11 00:48:11.767675 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-11 00:48:11.767679 | orchestrator | Wednesday 11 March 
2026 00:46:38 +0000 (0:00:00.217) 0:01:00.996 ******* 2026-03-11 00:48:11.767683 | orchestrator | 2026-03-11 00:48:11.767686 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-11 00:48:11.767720 | orchestrator | Wednesday 11 March 2026 00:46:38 +0000 (0:00:00.064) 0:01:01.060 ******* 2026-03-11 00:48:11.767724 | orchestrator | 2026-03-11 00:48:11.767728 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-11 00:48:11.767732 | orchestrator | Wednesday 11 March 2026 00:46:38 +0000 (0:00:00.061) 0:01:01.122 ******* 2026-03-11 00:48:11.767736 | orchestrator | 2026-03-11 00:48:11.767740 | orchestrator | TASK [common : Flush handlers] ************************************************* 2026-03-11 00:48:11.767743 | orchestrator | Wednesday 11 March 2026 00:46:38 +0000 (0:00:00.061) 0:01:01.183 ******* 2026-03-11 00:48:11.767747 | orchestrator | 2026-03-11 00:48:11.767751 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2026-03-11 00:48:11.767757 | orchestrator | Wednesday 11 March 2026 00:46:38 +0000 (0:00:00.082) 0:01:01.266 ******* 2026-03-11 00:48:11.767761 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:48:11.767765 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:48:11.767768 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:48:11.767772 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:48:11.767776 | orchestrator | changed: [testbed-manager] 2026-03-11 00:48:11.767780 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:48:11.767783 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:48:11.767787 | orchestrator | 2026-03-11 00:48:11.767791 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2026-03-11 00:48:11.767795 | orchestrator | Wednesday 11 March 2026 00:47:09 +0000 (0:00:31.447) 0:01:32.713 ******* 2026-03-11 
00:48:11.767800 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:48:11.767806 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:48:11.767812 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:48:11.767817 | orchestrator | changed: [testbed-manager] 2026-03-11 00:48:11.767823 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:48:11.767829 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:48:11.767834 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:48:11.767845 | orchestrator | 2026-03-11 00:48:11.767850 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2026-03-11 00:48:11.767856 | orchestrator | Wednesday 11 March 2026 00:47:56 +0000 (0:00:46.866) 0:02:19.580 ******* 2026-03-11 00:48:11.767862 | orchestrator | ok: [testbed-manager] 2026-03-11 00:48:11.767868 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:48:11.767875 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:48:11.767881 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:48:11.767887 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:48:11.767894 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:48:11.767898 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:48:11.767902 | orchestrator | 2026-03-11 00:48:11.767906 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2026-03-11 00:48:11.767910 | orchestrator | Wednesday 11 March 2026 00:47:58 +0000 (0:00:02.147) 0:02:21.727 ******* 2026-03-11 00:48:11.767913 | orchestrator | changed: [testbed-manager] 2026-03-11 00:48:11.767917 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:48:11.767921 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:48:11.767924 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:48:11.767928 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:48:11.767932 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:48:11.767935 | orchestrator | changed: [testbed-node-5] 
2026-03-11 00:48:11.767939 | orchestrator | 2026-03-11 00:48:11.767943 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:48:11.767948 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-11 00:48:11.767953 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-11 00:48:11.767959 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-11 00:48:11.767965 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-11 00:48:11.767971 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-11 00:48:11.767977 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-11 00:48:11.767983 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2026-03-11 00:48:11.767988 | orchestrator | 2026-03-11 00:48:11.767994 | orchestrator | 2026-03-11 00:48:11.768001 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:48:11.768008 | orchestrator | Wednesday 11 March 2026 00:48:10 +0000 (0:00:11.027) 0:02:32.755 ******* 2026-03-11 00:48:11.768014 | orchestrator | =============================================================================== 2026-03-11 00:48:11.768020 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 46.87s 2026-03-11 00:48:11.768026 | orchestrator | common : Restart fluentd container ------------------------------------- 31.45s 2026-03-11 00:48:11.768032 | orchestrator | common : Restart cron container ---------------------------------------- 11.03s 2026-03-11 00:48:11.768036 | orchestrator | common : Copying over 
config.json files for services -------------------- 9.29s 2026-03-11 00:48:11.768039 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 5.54s 2026-03-11 00:48:11.768043 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 5.39s 2026-03-11 00:48:11.768047 | orchestrator | common : Ensuring config directories exist ------------------------------ 4.51s 2026-03-11 00:48:11.768054 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.40s 2026-03-11 00:48:11.768062 | orchestrator | common : Check common containers ---------------------------------------- 3.36s 2026-03-11 00:48:11.768066 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.15s 2026-03-11 00:48:11.768070 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.15s 2026-03-11 00:48:11.768073 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.98s 2026-03-11 00:48:11.768077 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.85s 2026-03-11 00:48:11.768081 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 2.85s 2026-03-11 00:48:11.768088 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 2.27s 2026-03-11 00:48:11.768092 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.15s 2026-03-11 00:48:11.768096 | orchestrator | common : Find custom fluentd input config files ------------------------- 1.77s 2026-03-11 00:48:11.768100 | orchestrator | common : Creating log volume -------------------------------------------- 1.51s 2026-03-11 00:48:11.768103 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.25s 2026-03-11 00:48:11.768107 | orchestrator | common : Link kolla_logs volume 
to /var/log/kolla ----------------------- 1.21s 2026-03-11 00:48:11.771074 | orchestrator | 2026-03-11 00:48:11 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:48:11.771634 | orchestrator | 2026-03-11 00:48:11 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:48:11.772561 | orchestrator | 2026-03-11 00:48:11 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:48:11.772599 | orchestrator | 2026-03-11 00:48:11 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:14.816780 | orchestrator | 2026-03-11 00:48:14 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:14.817633 | orchestrator | 2026-03-11 00:48:14 | INFO  | Task 981a7882-e1cc-4dd6-93e1-cf14c8585311 is in state STARTED 2026-03-11 00:48:14.818340 | orchestrator | 2026-03-11 00:48:14 | INFO  | Task 82ba40c0-9407-409b-84f0-bb2e113d26f8 is in state STARTED 2026-03-11 00:48:14.819224 | orchestrator | 2026-03-11 00:48:14 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:48:14.820052 | orchestrator | 2026-03-11 00:48:14 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:48:14.820784 | orchestrator | 2026-03-11 00:48:14 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:48:14.820855 | orchestrator | 2026-03-11 00:48:14 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:17.853789 | orchestrator | 2026-03-11 00:48:17 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:17.854121 | orchestrator | 2026-03-11 00:48:17 | INFO  | Task 981a7882-e1cc-4dd6-93e1-cf14c8585311 is in state STARTED 2026-03-11 00:48:17.854952 | orchestrator | 2026-03-11 00:48:17 | INFO  | Task 82ba40c0-9407-409b-84f0-bb2e113d26f8 is in state STARTED 2026-03-11 00:48:17.855525 | orchestrator | 2026-03-11 00:48:17 | INFO  | Task 
5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:48:17.856089 | orchestrator | 2026-03-11 00:48:17 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:48:17.858079 | orchestrator | 2026-03-11 00:48:17 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:48:17.858103 | orchestrator | 2026-03-11 00:48:17 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:20.890892 | orchestrator | 2026-03-11 00:48:20 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:20.890953 | orchestrator | 2026-03-11 00:48:20 | INFO  | Task 981a7882-e1cc-4dd6-93e1-cf14c8585311 is in state STARTED 2026-03-11 00:48:20.890958 | orchestrator | 2026-03-11 00:48:20 | INFO  | Task 82ba40c0-9407-409b-84f0-bb2e113d26f8 is in state STARTED 2026-03-11 00:48:20.891843 | orchestrator | 2026-03-11 00:48:20 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:48:20.891874 | orchestrator | 2026-03-11 00:48:20 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:48:20.894856 | orchestrator | 2026-03-11 00:48:20 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:48:20.894889 | orchestrator | 2026-03-11 00:48:20 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:24.002321 | orchestrator | 2026-03-11 00:48:24 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:24.002606 | orchestrator | 2026-03-11 00:48:24 | INFO  | Task 981a7882-e1cc-4dd6-93e1-cf14c8585311 is in state STARTED 2026-03-11 00:48:24.003915 | orchestrator | 2026-03-11 00:48:24 | INFO  | Task 82ba40c0-9407-409b-84f0-bb2e113d26f8 is in state STARTED 2026-03-11 00:48:24.004858 | orchestrator | 2026-03-11 00:48:24 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:48:24.016910 | orchestrator | 2026-03-11 00:48:24 | INFO  | Task 
53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:48:24.016979 | orchestrator | 2026-03-11 00:48:24 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:48:24.016989 | orchestrator | 2026-03-11 00:48:24 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:27.041947 | orchestrator | 2026-03-11 00:48:27 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:27.042806 | orchestrator | 2026-03-11 00:48:27 | INFO  | Task 981a7882-e1cc-4dd6-93e1-cf14c8585311 is in state STARTED 2026-03-11 00:48:27.043514 | orchestrator | 2026-03-11 00:48:27 | INFO  | Task 82ba40c0-9407-409b-84f0-bb2e113d26f8 is in state STARTED 2026-03-11 00:48:27.044628 | orchestrator | 2026-03-11 00:48:27 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:48:27.047696 | orchestrator | 2026-03-11 00:48:27 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:48:27.048299 | orchestrator | 2026-03-11 00:48:27 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:48:27.048412 | orchestrator | 2026-03-11 00:48:27 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:30.080537 | orchestrator | 2026-03-11 00:48:30 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:30.080871 | orchestrator | 2026-03-11 00:48:30 | INFO  | Task 981a7882-e1cc-4dd6-93e1-cf14c8585311 is in state SUCCESS 2026-03-11 00:48:30.081781 | orchestrator | 2026-03-11 00:48:30 | INFO  | Task 82ba40c0-9407-409b-84f0-bb2e113d26f8 is in state STARTED 2026-03-11 00:48:30.083522 | orchestrator | 2026-03-11 00:48:30 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:48:30.101429 | orchestrator | 2026-03-11 00:48:30 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:48:30.101475 | orchestrator | 2026-03-11 00:48:30 | INFO  | Task 
1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:48:30.101480 | orchestrator | 2026-03-11 00:48:30 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:33.362795 | orchestrator | 2026-03-11 00:48:33 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:48:33.362870 | orchestrator | 2026-03-11 00:48:33 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:33.362876 | orchestrator | 2026-03-11 00:48:33 | INFO  | Task 82ba40c0-9407-409b-84f0-bb2e113d26f8 is in state STARTED 2026-03-11 00:48:33.362881 | orchestrator | 2026-03-11 00:48:33 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:48:33.362885 | orchestrator | 2026-03-11 00:48:33 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:48:33.362889 | orchestrator | 2026-03-11 00:48:33 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:48:33.362893 | orchestrator | 2026-03-11 00:48:33 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:36.279838 | orchestrator | 2026-03-11 00:48:36 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:48:36.283532 | orchestrator | 2026-03-11 00:48:36 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:36.285188 | orchestrator | 2026-03-11 00:48:36 | INFO  | Task 82ba40c0-9407-409b-84f0-bb2e113d26f8 is in state STARTED 2026-03-11 00:48:36.286751 | orchestrator | 2026-03-11 00:48:36 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:48:36.288502 | orchestrator | 2026-03-11 00:48:36 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:48:36.290754 | orchestrator | 2026-03-11 00:48:36 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:48:36.290857 | orchestrator | 2026-03-11 00:48:36 | INFO  | Wait 1 
second(s) until the next check 2026-03-11 00:48:39.333928 | orchestrator | 2026-03-11 00:48:39 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:48:39.334067 | orchestrator | 2026-03-11 00:48:39 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:39.335505 | orchestrator | 2026-03-11 00:48:39 | INFO  | Task 82ba40c0-9407-409b-84f0-bb2e113d26f8 is in state STARTED 2026-03-11 00:48:39.336193 | orchestrator | 2026-03-11 00:48:39 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:48:39.336879 | orchestrator | 2026-03-11 00:48:39 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:48:39.339323 | orchestrator | 2026-03-11 00:48:39 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:48:39.339362 | orchestrator | 2026-03-11 00:48:39 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:42.383063 | orchestrator | 2026-03-11 00:48:42 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:48:42.384307 | orchestrator | 2026-03-11 00:48:42 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:42.389761 | orchestrator | 2026-03-11 00:48:42 | INFO  | Task 82ba40c0-9407-409b-84f0-bb2e113d26f8 is in state STARTED 2026-03-11 00:48:42.390096 | orchestrator | 2026-03-11 00:48:42 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:48:42.390721 | orchestrator | 2026-03-11 00:48:42 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:48:42.391221 | orchestrator | 2026-03-11 00:48:42 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:48:42.391345 | orchestrator | 2026-03-11 00:48:42 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:45.420780 | orchestrator | 2026-03-11 00:48:45 | INFO  | Task 
b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:48:45.421591 | orchestrator | 2026-03-11 00:48:45 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:45.423176 | orchestrator | 2026-03-11 00:48:45 | INFO  | Task 82ba40c0-9407-409b-84f0-bb2e113d26f8 is in state STARTED 2026-03-11 00:48:45.425086 | orchestrator | 2026-03-11 00:48:45 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:48:45.427349 | orchestrator | 2026-03-11 00:48:45 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:48:45.427857 | orchestrator | 2026-03-11 00:48:45 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:48:45.427987 | orchestrator | 2026-03-11 00:48:45 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:48.522578 | orchestrator | 2026-03-11 00:48:48.522699 | orchestrator | 2026-03-11 00:48:48.522787 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 00:48:48.522806 | orchestrator | 2026-03-11 00:48:48.522821 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 00:48:48.522839 | orchestrator | Wednesday 11 March 2026 00:48:17 +0000 (0:00:00.215) 0:00:00.215 ******* 2026-03-11 00:48:48.522855 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:48:48.522873 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:48:48.522890 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:48:48.522907 | orchestrator | 2026-03-11 00:48:48.522923 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 00:48:48.522941 | orchestrator | Wednesday 11 March 2026 00:48:18 +0000 (0:00:00.739) 0:00:00.955 ******* 2026-03-11 00:48:48.522958 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2026-03-11 00:48:48.522974 | orchestrator | ok: [testbed-node-1] => 
(item=enable_memcached_True) 2026-03-11 00:48:48.522990 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2026-03-11 00:48:48.523007 | orchestrator | 2026-03-11 00:48:48.523024 | orchestrator | PLAY [Apply role memcached] **************************************************** 2026-03-11 00:48:48.523040 | orchestrator | 2026-03-11 00:48:48.523057 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2026-03-11 00:48:48.523073 | orchestrator | Wednesday 11 March 2026 00:48:19 +0000 (0:00:00.829) 0:00:01.784 ******* 2026-03-11 00:48:48.523090 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:48:48.523108 | orchestrator | 2026-03-11 00:48:48.523123 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2026-03-11 00:48:48.523140 | orchestrator | Wednesday 11 March 2026 00:48:20 +0000 (0:00:00.780) 0:00:02.564 ******* 2026-03-11 00:48:48.523156 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-11 00:48:48.523174 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-11 00:48:48.523190 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-11 00:48:48.523205 | orchestrator | 2026-03-11 00:48:48.523222 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2026-03-11 00:48:48.523238 | orchestrator | Wednesday 11 March 2026 00:48:21 +0000 (0:00:01.050) 0:00:03.614 ******* 2026-03-11 00:48:48.523254 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2026-03-11 00:48:48.523272 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2026-03-11 00:48:48.523288 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2026-03-11 00:48:48.523304 | orchestrator | 2026-03-11 00:48:48.523319 | orchestrator | TASK [memcached : Check memcached container] 
*********************************** 2026-03-11 00:48:48.523357 | orchestrator | Wednesday 11 March 2026 00:48:23 +0000 (0:00:02.286) 0:00:05.901 ******* 2026-03-11 00:48:48.523376 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:48:48.523393 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:48:48.523410 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:48:48.523426 | orchestrator | 2026-03-11 00:48:48.523442 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2026-03-11 00:48:48.523488 | orchestrator | Wednesday 11 March 2026 00:48:25 +0000 (0:00:02.462) 0:00:08.364 ******* 2026-03-11 00:48:48.523506 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:48:48.523523 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:48:48.523539 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:48:48.523555 | orchestrator | 2026-03-11 00:48:48.523572 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:48:48.523589 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:48:48.523607 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:48:48.523624 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:48:48.523640 | orchestrator | 2026-03-11 00:48:48.523656 | orchestrator | 2026-03-11 00:48:48.523672 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:48:48.523689 | orchestrator | Wednesday 11 March 2026 00:48:28 +0000 (0:00:03.021) 0:00:11.385 ******* 2026-03-11 00:48:48.523836 | orchestrator | =============================================================================== 2026-03-11 00:48:48.523859 | orchestrator | memcached : Restart memcached container --------------------------------- 3.02s 
2026-03-11 00:48:48.523875 | orchestrator | memcached : Check memcached container ----------------------------------- 2.46s 2026-03-11 00:48:48.523892 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.29s 2026-03-11 00:48:48.523909 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.05s 2026-03-11 00:48:48.523925 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.83s 2026-03-11 00:48:48.523941 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.78s 2026-03-11 00:48:48.523957 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.74s 2026-03-11 00:48:48.523973 | orchestrator | 2026-03-11 00:48:48.523990 | orchestrator | 2026-03-11 00:48:48.524006 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 00:48:48.524022 | orchestrator | 2026-03-11 00:48:48.524038 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 00:48:48.524054 | orchestrator | Wednesday 11 March 2026 00:48:17 +0000 (0:00:00.337) 0:00:00.337 ******* 2026-03-11 00:48:48.524068 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:48:48.524082 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:48:48.524096 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:48:48.524109 | orchestrator | 2026-03-11 00:48:48.524122 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 00:48:48.524155 | orchestrator | Wednesday 11 March 2026 00:48:18 +0000 (0:00:00.528) 0:00:00.865 ******* 2026-03-11 00:48:48.524170 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True) 2026-03-11 00:48:48.524183 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True) 2026-03-11 00:48:48.524196 | orchestrator | ok: [testbed-node-2] => 
(item=enable_redis_True) 2026-03-11 00:48:48.524208 | orchestrator | 2026-03-11 00:48:48.524219 | orchestrator | PLAY [Apply role redis] ******************************************************** 2026-03-11 00:48:48.524232 | orchestrator | 2026-03-11 00:48:48.524246 | orchestrator | TASK [redis : include_tasks] *************************************************** 2026-03-11 00:48:48.524259 | orchestrator | Wednesday 11 March 2026 00:48:19 +0000 (0:00:00.819) 0:00:01.685 ******* 2026-03-11 00:48:48.524272 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:48:48.524285 | orchestrator | 2026-03-11 00:48:48.524299 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2026-03-11 00:48:48.524313 | orchestrator | Wednesday 11 March 2026 00:48:20 +0000 (0:00:01.318) 0:00:03.004 ******* 2026-03-11 00:48:48.524330 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524429 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524453 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524467 | orchestrator | 2026-03-11 00:48:48.524481 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2026-03-11 00:48:48.524494 | orchestrator | Wednesday 11 March 2026 00:48:21 +0000 (0:00:01.460) 0:00:04.464 ******* 2026-03-11 00:48:48.524517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524545 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524559 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524595 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524610 | orchestrator | 2026-03-11 00:48:48.524624 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2026-03-11 00:48:48.524637 | orchestrator | Wednesday 11 March 2026 00:48:25 +0000 (0:00:03.684) 0:00:08.149 ******* 2026-03-11 00:48:48.524666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524682 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524701 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524755 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 
'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524770 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524784 | orchestrator | 2026-03-11 00:48:48.524803 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2026-03-11 00:48:48.524829 | orchestrator | Wednesday 11 March 2026 00:48:28 +0000 (0:00:02.964) 0:00:11.113 ******* 2026-03-11 00:48:48.524838 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524846 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2026-03-11 00:48:48.524894 | orchestrator | 2026-03-11 00:48:48.524902 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-11 00:48:48.524916 | orchestrator | Wednesday 11 March 2026 00:48:30 +0000 (0:00:02.073) 0:00:13.187 ******* 2026-03-11 00:48:48.524924 | orchestrator | 2026-03-11 00:48:48.524932 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-11 00:48:48.524945 | 
orchestrator | Wednesday 11 March 2026 00:48:30 +0000 (0:00:00.138) 0:00:13.326 ******* 2026-03-11 00:48:48.524953 | orchestrator | 2026-03-11 00:48:48.524961 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2026-03-11 00:48:48.524969 | orchestrator | Wednesday 11 March 2026 00:48:30 +0000 (0:00:00.135) 0:00:13.461 ******* 2026-03-11 00:48:48.524977 | orchestrator | 2026-03-11 00:48:48.524984 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ****************************** 2026-03-11 00:48:48.524992 | orchestrator | Wednesday 11 March 2026 00:48:31 +0000 (0:00:00.299) 0:00:13.761 ******* 2026-03-11 00:48:48.525000 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:48:48.525008 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:48:48.525016 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:48:48.525024 | orchestrator | 2026-03-11 00:48:48.525032 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] ********************* 2026-03-11 00:48:48.525040 | orchestrator | Wednesday 11 March 2026 00:48:35 +0000 (0:00:03.861) 0:00:17.622 ******* 2026-03-11 00:48:48.525048 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:48:48.525055 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:48:48.525063 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:48:48.525071 | orchestrator | 2026-03-11 00:48:48.525079 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:48:48.525087 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:48:48.525095 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:48:48.525103 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:48:48.525111 | orchestrator | 2026-03-11 00:48:48.525119 
| orchestrator | 2026-03-11 00:48:48.525127 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:48:48.525135 | orchestrator | Wednesday 11 March 2026 00:48:45 +0000 (0:00:10.472) 0:00:28.095 ******* 2026-03-11 00:48:48.525143 | orchestrator | =============================================================================== 2026-03-11 00:48:48.525151 | orchestrator | redis : Restart redis-sentinel container ------------------------------- 10.47s 2026-03-11 00:48:48.525159 | orchestrator | redis : Restart redis container ----------------------------------------- 3.86s 2026-03-11 00:48:48.525166 | orchestrator | redis : Copying over default config.json files -------------------------- 3.68s 2026-03-11 00:48:48.525178 | orchestrator | redis : Copying over redis config files --------------------------------- 2.96s 2026-03-11 00:48:48.525186 | orchestrator | redis : Check redis containers ------------------------------------------ 2.07s 2026-03-11 00:48:48.525194 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.46s 2026-03-11 00:48:48.525202 | orchestrator | redis : include_tasks --------------------------------------------------- 1.32s 2026-03-11 00:48:48.525210 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.82s 2026-03-11 00:48:48.525218 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.57s 2026-03-11 00:48:48.525226 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.53s 2026-03-11 00:48:48.525234 | orchestrator | 2026-03-11 00:48:48 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:48:48.525242 | orchestrator | 2026-03-11 00:48:48 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:48.525250 | orchestrator | 2026-03-11 00:48:48 | INFO  | Task 
82ba40c0-9407-409b-84f0-bb2e113d26f8 is in state SUCCESS 2026-03-11 00:48:48.525263 | orchestrator | 2026-03-11 00:48:48 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:48:48.525271 | orchestrator | 2026-03-11 00:48:48 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:48:48.525279 | orchestrator | 2026-03-11 00:48:48 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:48:48.525287 | orchestrator | 2026-03-11 00:48:48 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:51.521535 | orchestrator | 2026-03-11 00:48:51 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:48:51.526674 | orchestrator | 2026-03-11 00:48:51 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:51.526959 | orchestrator | 2026-03-11 00:48:51 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:48:51.527624 | orchestrator | 2026-03-11 00:48:51 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:48:51.528409 | orchestrator | 2026-03-11 00:48:51 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:48:51.528458 | orchestrator | 2026-03-11 00:48:51 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:54.562842 | orchestrator | 2026-03-11 00:48:54 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:48:54.564321 | orchestrator | 2026-03-11 00:48:54 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:54.565070 | orchestrator | 2026-03-11 00:48:54 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:48:54.565619 | orchestrator | 2026-03-11 00:48:54 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:48:54.568866 | orchestrator | 2026-03-11 00:48:54 | INFO  | Task 
1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:48:54.568917 | orchestrator | 2026-03-11 00:48:54 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:48:57.614373 | orchestrator | 2026-03-11 00:48:57 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:48:57.616008 | orchestrator | 2026-03-11 00:48:57 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:48:57.616075 | orchestrator | 2026-03-11 00:48:57 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:48:57.616257 | orchestrator | 2026-03-11 00:48:57 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:48:57.618984 | orchestrator | 2026-03-11 00:48:57 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:48:57.619059 | orchestrator | 2026-03-11 00:48:57 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:49:00.670206 | orchestrator | 2026-03-11 00:49:00 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:49:00.670255 | orchestrator | 2026-03-11 00:49:00 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:49:00.670260 | orchestrator | 2026-03-11 00:49:00 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:49:00.670264 | orchestrator | 2026-03-11 00:49:00 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:49:00.670268 | orchestrator | 2026-03-11 00:49:00 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:49:00.670281 | orchestrator | 2026-03-11 00:49:00 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:49:03.708555 | orchestrator | 2026-03-11 00:49:03 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:49:03.709040 | orchestrator | 2026-03-11 00:49:03 | INFO  | Task 
ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:49:03.710290 | orchestrator | 2026-03-11 00:49:03 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:49:03.710956 | orchestrator | 2026-03-11 00:49:03 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:49:03.711934 | orchestrator | 2026-03-11 00:49:03 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:49:03.711957 | orchestrator | 2026-03-11 00:49:03 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:49:06.738179 | orchestrator | 2026-03-11 00:49:06 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:49:06.738286 | orchestrator | 2026-03-11 00:49:06 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:49:06.738948 | orchestrator | 2026-03-11 00:49:06 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:49:06.739525 | orchestrator | 2026-03-11 00:49:06 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:49:06.742468 | orchestrator | 2026-03-11 00:49:06 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:49:06.742549 | orchestrator | 2026-03-11 00:49:06 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:49:09.771875 | orchestrator | 2026-03-11 00:49:09 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:49:09.772850 | orchestrator | 2026-03-11 00:49:09 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:49:09.773504 | orchestrator | 2026-03-11 00:49:09 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:49:09.774596 | orchestrator | 2026-03-11 00:49:09 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:49:09.776795 | orchestrator | 2026-03-11 00:49:09 | INFO  | Task 
1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:49:09.776824 | orchestrator | 2026-03-11 00:49:09 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:49:12.804903 | orchestrator | 2026-03-11 00:49:12 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:49:12.806183 | orchestrator | 2026-03-11 00:49:12 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:49:12.807426 | orchestrator | 2026-03-11 00:49:12 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:49:12.809192 | orchestrator | 2026-03-11 00:49:12 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:49:12.810821 | orchestrator | 2026-03-11 00:49:12 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:49:12.810856 | orchestrator | 2026-03-11 00:49:12 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:49:15.843945 | orchestrator | 2026-03-11 00:49:15 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:49:15.845306 | orchestrator | 2026-03-11 00:49:15 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:49:15.845740 | orchestrator | 2026-03-11 00:49:15 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:49:15.848478 | orchestrator | 2026-03-11 00:49:15 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:49:15.849178 | orchestrator | 2026-03-11 00:49:15 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state STARTED 2026-03-11 00:49:15.849735 | orchestrator | 2026-03-11 00:49:15 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:49:18.888926 | orchestrator | 2026-03-11 00:49:18 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:49:18.889002 | orchestrator | 2026-03-11 00:49:18 | INFO  | Task 
ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:49:18.890124 | orchestrator | 2026-03-11 00:49:18 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:49:18.890819 | orchestrator | 2026-03-11 00:49:18 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:49:18.891991 | orchestrator | 2026-03-11 00:49:18 | INFO  | Task 1453de7d-7b89-43e8-aec8-3014a4466f90 is in state SUCCESS 2026-03-11 00:49:18.894162 | orchestrator | 2026-03-11 00:49:18.894216 | orchestrator | 2026-03-11 00:49:18.894225 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 00:49:18.894232 | orchestrator | 2026-03-11 00:49:18.894239 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 00:49:18.894245 | orchestrator | Wednesday 11 March 2026 00:48:17 +0000 (0:00:00.325) 0:00:00.325 ******* 2026-03-11 00:49:18.894251 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:49:18.894259 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:49:18.894265 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:49:18.894271 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:49:18.894277 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:49:18.894283 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:49:18.894289 | orchestrator | 2026-03-11 00:49:18.894295 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 00:49:18.894302 | orchestrator | Wednesday 11 March 2026 00:48:18 +0000 (0:00:01.238) 0:00:01.563 ******* 2026-03-11 00:49:18.894308 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-11 00:49:18.894315 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-11 00:49:18.894320 | orchestrator | ok: [testbed-node-2] => 
(item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-11 00:49:18.894326 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-11 00:49:18.894332 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-11 00:49:18.894338 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2026-03-11 00:49:18.894344 | orchestrator | 2026-03-11 00:49:18.894351 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2026-03-11 00:49:18.894357 | orchestrator | 2026-03-11 00:49:18.894363 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2026-03-11 00:49:18.894369 | orchestrator | Wednesday 11 March 2026 00:48:19 +0000 (0:00:01.312) 0:00:02.876 ******* 2026-03-11 00:49:18.894377 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:49:18.894383 | orchestrator | 2026-03-11 00:49:18.894387 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-11 00:49:18.894391 | orchestrator | Wednesday 11 March 2026 00:48:22 +0000 (0:00:02.025) 0:00:04.901 ******* 2026-03-11 00:49:18.894395 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-11 00:49:18.894399 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-11 00:49:18.894404 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-11 00:49:18.894410 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-11 00:49:18.894416 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-11 00:49:18.894422 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-11 00:49:18.894444 | orchestrator | 2026-03-11 00:49:18.894450 | orchestrator | TASK 
[module-load : Persist modules via modules-load.d] ************************ 2026-03-11 00:49:18.894455 | orchestrator | Wednesday 11 March 2026 00:48:24 +0000 (0:00:02.438) 0:00:07.340 ******* 2026-03-11 00:49:18.894462 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2026-03-11 00:49:18.894468 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2026-03-11 00:49:18.894474 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2026-03-11 00:49:18.894481 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2026-03-11 00:49:18.894487 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2026-03-11 00:49:18.894493 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2026-03-11 00:49:18.894499 | orchestrator | 2026-03-11 00:49:18.894505 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-11 00:49:18.894509 | orchestrator | Wednesday 11 March 2026 00:48:26 +0000 (0:00:01.712) 0:00:09.052 ******* 2026-03-11 00:49:18.894513 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2026-03-11 00:49:18.894517 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:49:18.894521 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2026-03-11 00:49:18.894525 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:49:18.894529 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2026-03-11 00:49:18.894533 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:49:18.894536 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2026-03-11 00:49:18.894540 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2026-03-11 00:49:18.894544 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:49:18.894547 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:49:18.894551 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2026-03-11 00:49:18.894555 | orchestrator | 
skipping: [testbed-node-5] 2026-03-11 00:49:18.894559 | orchestrator | 2026-03-11 00:49:18.894563 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2026-03-11 00:49:18.894567 | orchestrator | Wednesday 11 March 2026 00:48:27 +0000 (0:00:01.342) 0:00:10.395 ******* 2026-03-11 00:49:18.894571 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:49:18.894574 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:49:18.894578 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:49:18.894582 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:49:18.894586 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:49:18.894589 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:49:18.894593 | orchestrator | 2026-03-11 00:49:18.894597 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2026-03-11 00:49:18.894601 | orchestrator | Wednesday 11 March 2026 00:48:28 +0000 (0:00:00.856) 0:00:11.252 ******* 2026-03-11 00:49:18.894622 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 
'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894637 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894644 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 
2026-03-11 00:49:18.894650 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894657 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894670 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894678 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894690 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894696 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894703 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894715 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894721 | orchestrator | 2026-03-11 00:49:18.894725 | orchestrator | TASK [openvswitch 
: Copying over config.json files for services] *************** 2026-03-11 00:49:18.894729 | orchestrator | Wednesday 11 March 2026 00:48:30 +0000 (0:00:01.973) 0:00:13.225 ******* 2026-03-11 00:49:18.894733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894746 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894751 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894755 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894791 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894802 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894809 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894815 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894828 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894840 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894848 | orchestrator | 2026-03-11 00:49:18.894852 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2026-03-11 00:49:18.894857 | orchestrator | Wednesday 11 March 2026 00:48:34 +0000 (0:00:03.946) 0:00:17.172 ******* 2026-03-11 00:49:18.894861 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:49:18.894865 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:49:18.894870 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:49:18.894874 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:49:18.894878 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:49:18.894882 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:49:18.894887 | orchestrator | 2026-03-11 00:49:18.894891 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2026-03-11 00:49:18.894895 | orchestrator | Wednesday 11 March 
2026 00:48:35 +0000 (0:00:01.295) 0:00:18.468 ******* 2026-03-11 00:49:18.894900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894904 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894926 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894935 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 
'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894940 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894949 
| orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894954 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894970 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894975 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2026-03-11 00:49:18.894980 | orchestrator | 2026-03-11 00:49:18.894984 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-11 00:49:18.894988 | orchestrator | Wednesday 11 March 2026 00:48:38 +0000 (0:00:03.342) 0:00:21.810 ******* 2026-03-11 00:49:18.894993 | orchestrator | 2026-03-11 00:49:18.894997 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-11 00:49:18.895001 | orchestrator | Wednesday 11 March 2026 00:48:39 +0000 (0:00:00.288) 0:00:22.098 ******* 2026-03-11 00:49:18.895005 | orchestrator | 2026-03-11 00:49:18.895010 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-11 00:49:18.895014 | orchestrator | Wednesday 11 March 2026 00:48:39 +0000 (0:00:00.110) 0:00:22.209 ******* 2026-03-11 00:49:18.895018 | orchestrator | 2026-03-11 00:49:18.895023 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-11 00:49:18.895027 | orchestrator | Wednesday 11 March 
2026 00:48:39 +0000 (0:00:00.101) 0:00:22.311 ******* 2026-03-11 00:49:18.895031 | orchestrator | 2026-03-11 00:49:18.895036 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-11 00:49:18.895040 | orchestrator | Wednesday 11 March 2026 00:48:39 +0000 (0:00:00.099) 0:00:22.410 ******* 2026-03-11 00:49:18.895044 | orchestrator | 2026-03-11 00:49:18.895049 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2026-03-11 00:49:18.895053 | orchestrator | Wednesday 11 March 2026 00:48:39 +0000 (0:00:00.211) 0:00:22.622 ******* 2026-03-11 00:49:18.895057 | orchestrator | 2026-03-11 00:49:18.895061 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2026-03-11 00:49:18.895066 | orchestrator | Wednesday 11 March 2026 00:48:39 +0000 (0:00:00.175) 0:00:22.797 ******* 2026-03-11 00:49:18.895070 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:49:18.895074 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:49:18.895078 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:49:18.895083 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:49:18.895087 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:49:18.895092 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:49:18.895096 | orchestrator | 2026-03-11 00:49:18.895100 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2026-03-11 00:49:18.895105 | orchestrator | Wednesday 11 March 2026 00:48:49 +0000 (0:00:09.537) 0:00:32.334 ******* 2026-03-11 00:49:18.895114 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:49:18.895117 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:49:18.895121 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:49:18.895125 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:49:18.895128 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:49:18.895132 | orchestrator | ok: 
[testbed-node-5] 2026-03-11 00:49:18.895136 | orchestrator | 2026-03-11 00:49:18.895140 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-11 00:49:18.895143 | orchestrator | Wednesday 11 March 2026 00:48:50 +0000 (0:00:01.499) 0:00:33.834 ******* 2026-03-11 00:49:18.895147 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:49:18.895151 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:49:18.895155 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:49:18.895158 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:49:18.895162 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:49:18.895175 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:49:18.895179 | orchestrator | 2026-03-11 00:49:18.895189 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2026-03-11 00:49:18.895193 | orchestrator | Wednesday 11 March 2026 00:48:56 +0000 (0:00:05.896) 0:00:39.731 ******* 2026-03-11 00:49:18.895197 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2026-03-11 00:49:18.895201 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2026-03-11 00:49:18.895205 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2026-03-11 00:49:18.895209 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2026-03-11 00:49:18.895212 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2026-03-11 00:49:18.895221 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2026-03-11 00:49:18.895225 | orchestrator | changed: [testbed-node-1] => 
(item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2026-03-11 00:49:18.895229 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2026-03-11 00:49:18.895232 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2026-03-11 00:49:18.895236 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2026-03-11 00:49:18.895240 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2026-03-11 00:49:18.895244 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2026-03-11 00:49:18.895247 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-11 00:49:18.895251 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-11 00:49:18.895255 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-11 00:49:18.895259 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-11 00:49:18.895262 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-11 00:49:18.895266 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2026-03-11 00:49:18.895270 | orchestrator | 2026-03-11 00:49:18.895274 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2026-03-11 00:49:18.895281 | orchestrator | Wednesday 11 March 2026 
00:49:04 +0000 (0:00:07.884) 0:00:47.615 ******* 2026-03-11 00:49:18.895285 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2026-03-11 00:49:18.895289 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:49:18.895293 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2026-03-11 00:49:18.895297 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:49:18.895301 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2026-03-11 00:49:18.895304 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:49:18.895308 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2026-03-11 00:49:18.895312 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2026-03-11 00:49:18.895316 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2026-03-11 00:49:18.895320 | orchestrator | 2026-03-11 00:49:18.895324 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2026-03-11 00:49:18.895327 | orchestrator | Wednesday 11 March 2026 00:49:06 +0000 (0:00:02.247) 0:00:49.863 ******* 2026-03-11 00:49:18.895331 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2026-03-11 00:49:18.895335 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:49:18.895339 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2026-03-11 00:49:18.895343 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:49:18.895346 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2026-03-11 00:49:18.895350 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:49:18.895354 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2026-03-11 00:49:18.895358 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2026-03-11 00:49:18.895362 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2026-03-11 00:49:18.895365 | orchestrator | 2026-03-11 00:49:18.895369 | orchestrator | RUNNING HANDLER 
[openvswitch : Restart openvswitch-vswitchd container] ********* 2026-03-11 00:49:18.895373 | orchestrator | Wednesday 11 March 2026 00:49:09 +0000 (0:00:02.908) 0:00:52.772 ******* 2026-03-11 00:49:18.895377 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:49:18.895380 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:49:18.895384 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:49:18.895388 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:49:18.895392 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:49:18.895395 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:49:18.895399 | orchestrator | 2026-03-11 00:49:18.895403 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:49:18.895407 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-11 00:49:18.895411 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-11 00:49:18.895415 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-11 00:49:18.895418 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-11 00:49:18.895422 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-11 00:49:18.895431 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-11 00:49:18.895435 | orchestrator | 2026-03-11 00:49:18.895439 | orchestrator | 2026-03-11 00:49:18.895443 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:49:18.895447 | orchestrator | Wednesday 11 March 2026 00:49:17 +0000 (0:00:07.782) 0:01:00.555 ******* 2026-03-11 00:49:18.895451 | orchestrator | 
=============================================================================== 2026-03-11 00:49:18.895463 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 13.68s 2026-03-11 00:49:18.895469 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------- 9.54s 2026-03-11 00:49:18.895478 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.88s 2026-03-11 00:49:18.895485 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.95s 2026-03-11 00:49:18.895491 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 3.34s 2026-03-11 00:49:18.895497 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 2.91s 2026-03-11 00:49:18.895503 | orchestrator | module-load : Load modules ---------------------------------------------- 2.44s 2026-03-11 00:49:18.895509 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.25s 2026-03-11 00:49:18.895516 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.03s 2026-03-11 00:49:18.895521 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.97s 2026-03-11 00:49:18.895527 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.71s 2026-03-11 00:49:18.895533 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.50s 2026-03-11 00:49:18.895539 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.34s 2026-03-11 00:49:18.895545 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.31s 2026-03-11 00:49:18.895550 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.30s 2026-03-11 00:49:18.895555 | orchestrator | Group hosts 
based on Kolla action --------------------------------------- 1.24s 2026-03-11 00:49:18.895560 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 0.99s 2026-03-11 00:49:18.895566 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.86s 2026-03-11 00:49:18.895572 | orchestrator | 2026-03-11 00:49:18 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:49:21.930232 | orchestrator | 2026-03-11 00:49:21 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:49:21.930313 | orchestrator | 2026-03-11 00:49:21 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:49:21.930321 | orchestrator | 2026-03-11 00:49:21 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:49:21.930945 | orchestrator | 2026-03-11 00:49:21 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:49:21.932055 | orchestrator | 2026-03-11 00:49:21 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:49:21.932121 | orchestrator | 2026-03-11 00:49:21 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:49:24.957717 | orchestrator | 2026-03-11 00:49:24 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:49:24.958085 | orchestrator | 2026-03-11 00:49:24 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:49:24.958851 | orchestrator | 2026-03-11 00:49:24 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:49:24.959603 | orchestrator | 2026-03-11 00:49:24 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:49:24.960539 | orchestrator | 2026-03-11 00:49:24 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:49:24.960577 | orchestrator | 2026-03-11 00:49:24 | INFO  | Wait 1 second(s) until 
the next check 2026-03-11 00:49:58.708386 | orchestrator
| 2026-03-11 00:49:58 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:49:58.709079 | orchestrator | 2026-03-11 00:49:58 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:49:58.709919 | orchestrator | 2026-03-11 00:49:58 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:49:58.710779 | orchestrator | 2026-03-11 00:49:58 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:49:58.711712 | orchestrator | 2026-03-11 00:49:58 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:49:58.711755 | orchestrator | 2026-03-11 00:49:58 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:50:01.784077 | orchestrator | 2026-03-11 00:50:01 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:50:01.784142 | orchestrator | 2026-03-11 00:50:01 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:50:01.784150 | orchestrator | 2026-03-11 00:50:01 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state STARTED 2026-03-11 00:50:01.784155 | orchestrator | 2026-03-11 00:50:01 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:50:01.784160 | orchestrator | 2026-03-11 00:50:01 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:50:01.784166 | orchestrator | 2026-03-11 00:50:01 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:50:04.816954 | orchestrator | 2026-03-11 00:50:04 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:50:04.818305 | orchestrator | 2026-03-11 00:50:04 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED 2026-03-11 00:50:04.820864 | orchestrator | 2026-03-11 00:50:04 | INFO  | Task ae377ac9-00e2-4c69-8135-6481580c9f9f is in state SUCCESS 2026-03-11 00:50:04.822642 | orchestrator | 
2026-03-11 00:50:04.822698 | orchestrator | 2026-03-11 00:50:04.822707 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2026-03-11 00:50:04.822713 | orchestrator | 2026-03-11 00:50:04.822718 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2026-03-11 00:50:04.822733 | orchestrator | Wednesday 11 March 2026 00:45:38 +0000 (0:00:00.243) 0:00:00.243 ******* 2026-03-11 00:50:04.822739 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:50:04.822745 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:50:04.822750 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:50:04.822756 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:50:04.822761 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:50:04.822768 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:50:04.822775 | orchestrator | 2026-03-11 00:50:04.822780 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2026-03-11 00:50:04.822785 | orchestrator | Wednesday 11 March 2026 00:45:38 +0000 (0:00:00.631) 0:00:00.875 ******* 2026-03-11 00:50:04.822790 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:50:04.822796 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:50:04.822802 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:50:04.822806 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:50:04.822809 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:50:04.822813 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:50:04.822816 | orchestrator | 2026-03-11 00:50:04.822819 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2026-03-11 00:50:04.822822 | orchestrator | Wednesday 11 March 2026 00:45:39 +0000 (0:00:00.637) 0:00:01.512 ******* 2026-03-11 00:50:04.822887 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:50:04.822891 | orchestrator | skipping: [testbed-node-4] 2026-03-11 
00:50:04.822894 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:50:04.822897 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:50:04.822901 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:50:04.822904 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:50:04.822907 | orchestrator | 2026-03-11 00:50:04.822910 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2026-03-11 00:50:04.822913 | orchestrator | Wednesday 11 March 2026 00:45:40 +0000 (0:00:00.653) 0:00:02.166 ******* 2026-03-11 00:50:04.822916 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:50:04.822919 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:50:04.822922 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:50:04.822925 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:50:04.822928 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:50:04.822931 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:50:04.822934 | orchestrator | 2026-03-11 00:50:04.822937 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2026-03-11 00:50:04.822940 | orchestrator | Wednesday 11 March 2026 00:45:42 +0000 (0:00:02.546) 0:00:04.712 ******* 2026-03-11 00:50:04.822944 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:50:04.822947 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:50:04.822950 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:50:04.822953 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:50:04.822956 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:50:04.822959 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:50:04.822962 | orchestrator | 2026-03-11 00:50:04.822965 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2026-03-11 00:50:04.822968 | orchestrator | Wednesday 11 March 2026 00:45:43 +0000 (0:00:01.248) 0:00:05.961 ******* 2026-03-11 00:50:04.822971 | 
orchestrator | changed: [testbed-node-3] 2026-03-11 00:50:04.822974 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:50:04.822977 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:50:04.822981 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:50:04.822984 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:50:04.822987 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:50:04.822990 | orchestrator | 2026-03-11 00:50:04.822993 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2026-03-11 00:50:04.822996 | orchestrator | Wednesday 11 March 2026 00:45:44 +0000 (0:00:01.007) 0:00:06.968 ******* 2026-03-11 00:50:04.823000 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:50:04.823003 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:50:04.823006 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:50:04.823009 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:50:04.823012 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:50:04.823015 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:50:04.823018 | orchestrator | 2026-03-11 00:50:04.823021 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2026-03-11 00:50:04.823024 | orchestrator | Wednesday 11 March 2026 00:45:45 +0000 (0:00:01.060) 0:00:08.028 ******* 2026-03-11 00:50:04.823027 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:50:04.823030 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:50:04.823033 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:50:04.823036 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:50:04.823039 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:50:04.823042 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:50:04.823045 | orchestrator | 2026-03-11 00:50:04.823048 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2026-03-11 
00:50:04.823052 | orchestrator | Wednesday 11 March 2026 00:45:46 +0000 (0:00:01.039) 0:00:09.067 ******* 2026-03-11 00:50:04.823055 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-11 00:50:04.823061 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-11 00:50:04.823064 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:50:04.823067 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-11 00:50:04.823070 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-11 00:50:04.823073 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-11 00:50:04.823076 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-11 00:50:04.823079 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:50:04.823082 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-11 00:50:04.823086 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-11 00:50:04.823097 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:50:04.823101 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-11 00:50:04.823104 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-11 00:50:04.823107 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:50:04.823113 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:50:04.823117 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-11 00:50:04.823120 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-11 00:50:04.823123 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:50:04.823126 | orchestrator | 2026-03-11 
00:50:04.823129 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2026-03-11 00:50:04.823132 | orchestrator | Wednesday 11 March 2026 00:45:48 +0000 (0:00:01.750) 0:00:10.818 ******* 2026-03-11 00:50:04.823135 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:50:04.823138 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:50:04.823141 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:50:04.823144 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:50:04.823147 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:50:04.823150 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:50:04.823153 | orchestrator | 2026-03-11 00:50:04.823156 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2026-03-11 00:50:04.823160 | orchestrator | Wednesday 11 March 2026 00:45:49 +0000 (0:00:01.296) 0:00:12.115 ******* 2026-03-11 00:50:04.823163 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:50:04.823166 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:50:04.823169 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:50:04.823172 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:50:04.823175 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:50:04.823178 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:50:04.823181 | orchestrator | 2026-03-11 00:50:04.823184 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2026-03-11 00:50:04.823188 | orchestrator | Wednesday 11 March 2026 00:45:51 +0000 (0:00:01.250) 0:00:13.365 ******* 2026-03-11 00:50:04.823191 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:50:04.823194 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:50:04.823197 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:50:04.823200 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:50:04.823203 | orchestrator | changed: [testbed-node-5] 
2026-03-11 00:50:04.823206 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:50:04.823209 | orchestrator | 2026-03-11 00:50:04.823212 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2026-03-11 00:50:04.823215 | orchestrator | Wednesday 11 March 2026 00:45:56 +0000 (0:00:05.304) 0:00:18.670 ******* 2026-03-11 00:50:04.823219 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:50:04.823224 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:50:04.823229 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:50:04.823234 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:50:04.823244 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:50:04.823249 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:50:04.823254 | orchestrator | 2026-03-11 00:50:04.823258 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2026-03-11 00:50:04.823263 | orchestrator | Wednesday 11 March 2026 00:45:59 +0000 (0:00:03.210) 0:00:21.880 ******* 2026-03-11 00:50:04.823268 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:50:04.823274 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:50:04.823279 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:50:04.823284 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:50:04.823290 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:50:04.823295 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:50:04.823300 | orchestrator | 2026-03-11 00:50:04.823305 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2026-03-11 00:50:04.823312 | orchestrator | Wednesday 11 March 2026 00:46:03 +0000 (0:00:03.948) 0:00:25.829 ******* 2026-03-11 00:50:04.823317 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:50:04.823323 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:50:04.823330 
| orchestrator | skipping: [testbed-node-5] 2026-03-11 00:50:04.823334 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:50:04.823337 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:50:04.823341 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:50:04.823345 | orchestrator | 2026-03-11 00:50:04.823349 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2026-03-11 00:50:04.823352 | orchestrator | Wednesday 11 March 2026 00:46:04 +0000 (0:00:00.833) 0:00:26.662 ******* 2026-03-11 00:50:04.823356 | orchestrator | skipping: [testbed-node-3] => (item=rancher)  2026-03-11 00:50:04.823360 | orchestrator | skipping: [testbed-node-3] => (item=rancher/k3s)  2026-03-11 00:50:04.823364 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:50:04.823368 | orchestrator | skipping: [testbed-node-4] => (item=rancher)  2026-03-11 00:50:04.823372 | orchestrator | skipping: [testbed-node-4] => (item=rancher/k3s)  2026-03-11 00:50:04.823375 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:50:04.823379 | orchestrator | skipping: [testbed-node-5] => (item=rancher)  2026-03-11 00:50:04.823383 | orchestrator | skipping: [testbed-node-5] => (item=rancher/k3s)  2026-03-11 00:50:04.823386 | orchestrator | skipping: [testbed-node-0] => (item=rancher)  2026-03-11 00:50:04.823390 | orchestrator | skipping: [testbed-node-0] => (item=rancher/k3s)  2026-03-11 00:50:04.823394 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:50:04.823397 | orchestrator | skipping: [testbed-node-1] => (item=rancher)  2026-03-11 00:50:04.823401 | orchestrator | skipping: [testbed-node-1] => (item=rancher/k3s)  2026-03-11 00:50:04.823405 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:50:04.823408 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:50:04.823412 | orchestrator | skipping: [testbed-node-2] => (item=rancher)  2026-03-11 00:50:04.823440 | orchestrator | skipping: [testbed-node-2] => 
(item=rancher/k3s)
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] ***
Wednesday 11 March 2026 00:46:05 +0000 (0:00:01.023) 0:00:27.685 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-0]
skipping: [testbed-node-2]

TASK [k3s_custom_registries : Remove /etc/rancher/k3s/registries.yaml when no registries configured] ***
Wednesday 11 March 2026 00:46:06 +0000 (0:00:00.690) 0:00:28.376 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY [Deploy k3s master nodes] *************************************************

TASK [k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers] ***
Wednesday 11 March 2026 00:46:07 +0000 (0:00:01.661) 0:00:30.038 *******
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [k3s_server : Stop k3s-init] **********************************************
Wednesday 11 March 2026 00:46:09 +0000 (0:00:01.946) 0:00:31.984 *******
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-1]

TASK [k3s_server : Stop k3s] ***************************************************
Wednesday 11 March 2026 00:46:11 +0000 (0:00:01.538) 0:00:33.523 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Clean previous runs of k3s-init] ****************************
Wednesday 11 March 2026 00:46:12 +0000 (0:00:01.067) 0:00:34.590 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Deploy K3s http_proxy conf] *********************************
Wednesday 11 March 2026 00:46:14 +0000 (0:00:01.867) 0:00:36.458 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create /etc/rancher/k3s directory] **************************
Wednesday 11 March 2026 00:46:15 +0000 (0:00:00.894) 0:00:37.353 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create custom resolv.conf for k3s] **************************
Wednesday 11 March 2026 00:46:16 +0000 (0:00:00.944) 0:00:38.297 *******
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [k3s_server : Deploy vip manifest] ****************************************
Wednesday 11 March 2026 00:46:17 +0000 (0:00:01.608) 0:00:39.906 *******
included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server : Set _kube_vip_bgp_peers fact] *******************************
Wednesday 11 March 2026 00:46:18 +0000 (0:00:00.534) 0:00:40.441 *******
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-1]

TASK [k3s_server : Create manifests directory on first master] *****************
Wednesday 11 March 2026 00:46:21 +0000 (0:00:03.223) 0:00:43.664 *******
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Download vip rbac manifest to first master] *****************
Wednesday 11 March 2026 00:46:22 +0000 (0:00:00.856) 0:00:44.520 *******
skipping: [testbed-node-2]
skipping: [testbed-node-1]
changed: [testbed-node-0]

TASK [k3s_server : Copy vip manifest to first master] **************************
Wednesday 11 March 2026 00:46:23 +0000 (0:00:01.121) 0:00:45.642 *******
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [k3s_server : Deploy metallb manifest] ************************************
Wednesday 11 March 2026 00:46:25 +0000 (0:00:02.096) 0:00:47.738 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Deploy kube-vip manifest] ***********************************
Wednesday 11 March 2026 00:46:26 +0000 (0:00:00.657) 0:00:48.397 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
Wednesday 11 March 2026 00:46:26 +0000 (0:00:00.409) 0:00:48.806 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Detect Kubernetes version for label compatibility] **********
Wednesday 11 March 2026 00:46:28 +0000 (0:00:01.791) 0:00:50.598 *******
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [k3s_server : Set node role label selector based on Kubernetes version] ***
Wednesday 11 March 2026 00:46:30 +0000 (0:00:02.321) 0:00:52.919 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
Wednesday 11 March 2026 00:46:31 +0000 (0:00:00.702) 0:00:53.622 *******
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
ok: [testbed-node-2]
ok: [testbed-node-0]
ok: [testbed-node-1]

TASK [k3s_server : Save logs of k3s-init.service] ******************************
Wednesday 11 March 2026 00:47:15 +0000 (0:00:43.501) 0:01:37.124 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Kill the temporary service used for initialization] *********
Wednesday 11 March 2026 00:47:15 +0000 (0:00:00.323) 0:01:37.447 *******
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [k3s_server : Copy K3s service file] **************************************
Wednesday 11 March 2026 00:47:16 +0000 (0:00:01.143) 0:01:38.591 *******
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [k3s_server : Enable and check K3s service] *******************************
Wednesday 11 March 2026 00:47:18 +0000 (0:00:01.668) 0:01:40.259 *******
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-1]

TASK [k3s_server : Wait for node-token] ****************************************
Wednesday 11 March 2026 00:47:42 +0000 (0:00:24.573) 0:02:04.832 *******
ok: [testbed-node-2]
ok: [testbed-node-1]
ok: [testbed-node-0]

TASK [k3s_server : Register node-token file access mode] ***********************
Wednesday 11 March 2026 00:47:43 +0000 (0:00:00.760) 0:02:05.593 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Change file access node-token] ******************************
Wednesday 11 March 2026 00:47:44 +0000 (0:00:00.670) 0:02:06.263 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Read node-token from master] ********************************
Wednesday 11 March 2026 00:47:44 +0000 (0:00:00.654) 0:02:06.918 *******
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [k3s_server : Store Master node-token] ************************************
Wednesday 11 March 2026 00:47:45 +0000 (0:00:00.922) 0:02:07.840 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Restore node-token file access] *****************************
Wednesday 11 March 2026 00:47:46 +0000 (0:00:00.282) 0:02:08.122 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create directory .kube] *************************************
Wednesday 11 March 2026 00:47:46 +0000 (0:00:00.635) 0:02:08.758 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Copy config file to user home directory] ********************
Wednesday 11 March 2026 00:47:47 +0000 (0:00:00.628) 0:02:09.387 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
Wednesday 11 March 2026 00:47:48 +0000 (0:00:01.096) 0:02:10.484 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [k3s_server : Create kubectl symlink] *************************************
Wednesday 11 March 2026 00:47:49 +0000 (0:00:00.831) 0:02:11.315 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Create crictl symlink] **************************************
Wednesday 11 March 2026 00:47:49 +0000 (0:00:00.333) 0:02:11.649 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server : Get contents of manifests folder] ***************************
Wednesday 11 March 2026 00:47:49 +0000 (0:00:00.303) 0:02:11.952 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Get sub dirs of manifests folder] ***************************
Wednesday 11 March 2026 00:47:50 +0000 (0:00:00.897) 0:02:12.849 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
Wednesday 11 March 2026 00:47:51 +0000 (0:00:00.649) 0:02:13.499 *******
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)

PLAY [Deploy k3s worker nodes] *************************************************

TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
Wednesday 11 March 2026 00:47:54 +0000 (0:00:02.965) 0:02:16.464 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Check if system is PXE-booted] *******************************
Wednesday 11 March 2026 00:47:54 +0000 (0:00:00.485) 0:02:16.950 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Set fact for PXE-booted system] ******************************
Wednesday 11 March 2026 00:47:55 +0000 (0:00:00.612) 0:02:17.563 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [k3s_agent : Include http_proxy configuration tasks] **********************
Wednesday 11 March 2026 00:47:55 +0000 (0:00:00.313) 0:02:17.876 *******
included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [k3s_agent : Create k3s-node.service.d directory] *************************
Wednesday 11 March 2026 00:47:56 +0000 (0:00:00.614) 0:02:18.491 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
Wednesday 11 March 2026 00:47:56 +0000 (0:00:00.295) 0:02:18.786 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
Wednesday 11 March 2026 00:47:56 +0000 (0:00:00.302) 0:02:19.089 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
Wednesday 11 March 2026 00:47:57 +0000 (0:00:00.303) 0:02:19.392 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
Wednesday 11 March 2026 00:47:58 +0000 (0:00:00.944) 0:02:20.336 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Configure the k3s service] ***********************************
Wednesday 11 March 2026 00:47:59 +0000 (0:00:01.065) 0:02:21.402 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [k3s_agent : Manage k3s service] ******************************************
Wednesday 11 March 2026 00:48:00 +0000 (0:00:01.413) 0:02:22.815 *******
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

PLAY [Prepare kubeconfig file] *************************************************

TASK [Get home directory of operator user] *************************************
Wednesday 11 March 2026 00:48:10 +0000 (0:00:10.249) 0:02:33.065 *******
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Wednesday 11 March 2026 00:48:11 +0000 (0:00:00.824) 0:02:33.889 *******
changed: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Wednesday 11 March 2026 00:48:12 +0000 (0:00:00.527) 0:02:34.417 *******
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Wednesday 11 March 2026 00:48:12 +0000 (0:00:00.619) 0:02:35.036 *******
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Wednesday 11 March 2026 00:48:13 +0000 (0:00:00.857) 0:02:35.893 *******
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Wednesday 11 March 2026 00:48:14 +0000 (0:00:00.594) 0:02:36.488 *******
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Wednesday 11 March 2026 00:48:16 +0000 (0:00:01.635) 0:02:38.123 *******
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Wednesday 11 March 2026 00:48:16 +0000 (0:00:00.817) 0:02:38.940 *******
changed: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Wednesday 11 March 2026 00:48:17 +0000 (0:00:01.017) 0:02:39.958 *******
changed: [testbed-manager]

PLAY [Apply role kubectl] ******************************************************

TASK [kubectl : Gather variables for each operating system] ********************
Wednesday 11 March 2026 00:48:18 +0000 (0:00:00.715) 0:02:40.674 *******
ok: [testbed-manager]

TASK [kubectl : Include distribution specific install tasks] *******************
Wednesday 11 March 2026 00:48:18 +0000 (0:00:00.130) 0:02:40.805 *******
included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager

TASK [kubectl : Remove old architecture-dependent repository] ******************
Wednesday 11 March 2026 00:48:18 +0000 (0:00:00.184) 0:02:40.990 *******
ok: [testbed-manager]

TASK [kubectl : Install apt-transport-https package] ***************************
Wednesday 11 March 2026 00:48:19 +0000 (0:00:00.767) 0:02:41.758 *******
ok: [testbed-manager]

TASK [kubectl : Add repository gpg key] ****************************************
Wednesday 11 March 2026 00:48:20 +0000 (0:00:01.169) 0:02:42.927 *******
changed: [testbed-manager]

TASK [kubectl : Set permissions of gpg key] ************************************
Wednesday 11 March 2026 00:48:21 +0000 (0:00:00.735) 0:02:43.662 *******
ok: [testbed-manager]

TASK [kubectl : Add repository Debian] *****************************************
Wednesday 11 March 2026 00:48:21 +0000 (0:00:00.417) 0:02:44.080 *******
changed: [testbed-manager]

TASK [kubectl : Install required packages] *************************************
Wednesday 11 March 2026 00:48:28 +0000 (0:00:06.411) 0:02:50.492 *******
changed: [testbed-manager]

TASK [kubectl : Remove kubectl symlink] ****************************************
Wednesday 11 March 2026 00:48:39 +0000 (0:00:11.318) 0:03:01.811 *******
ok: [testbed-manager]

PLAY [Run post actions on master nodes] ****************************************

TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
Wednesday 11 March 2026 00:48:40 +0000 (0:00:00.629) 0:03:02.440 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [k3s_server_post : Deploy calico] *****************************************
Wednesday 11 March 2026 00:48:40 +0000 (0:00:00.288) 0:03:02.729 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [k3s_server_post : Deploy cilium] *****************************************
Wednesday 11 March 2026 00:48:40 +0000 (0:00:00.259) 0:03:02.989 *******
included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [k3s_server_post : Create tmp directory on first master] ******************
Wednesday 11 March 2026 00:48:41 +0000 (0:00:00.667) 0:03:03.656 *******
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
Wednesday 11 March 2026 00:48:42 +0000 (0:00:00.890) 0:03:04.546 *******
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
Wednesday 11 March 2026 00:48:43 +0000 (0:00:00.717) 0:03:05.264 *******
skipping: [testbed-node-0]

TASK [k3s_server_post : Test for existing Cilium install] **********************
Wednesday 11 March 2026 00:48:43 +0000 (0:00:00.106) 0:03:05.370 *******
ok: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Check Cilium version] **********************************
Wednesday 11 March 2026 00:48:44 +0000 (0:00:00.911) 0:03:06.282 *******
skipping: [testbed-node-0]

TASK [k3s_server_post : Parse installed Cilium version] ************************
Wednesday 11 March 2026 00:48:44 +0000 (0:00:00.103) 0:03:06.386 *******
skipping: [testbed-node-0]

TASK [k3s_server_post : Determine if Cilium needs update] **********************
Wednesday 11 March 2026 00:48:44 +0000 (0:00:00.094) 0:03:06.480 *******
skipping: [testbed-node-0]

TASK [k3s_server_post : Log result] ********************************************
Wednesday 11 March 2026 00:48:44 +0000 (0:00:00.125) 0:03:06.606 *******
skipping: [testbed-node-0]

TASK [k3s_server_post : Install Cilium] ****************************************
Wednesday 11 March 2026 00:48:44 +0000 (0:00:00.123) 0:03:06.729 *******
changed: [testbed-node-0 -> localhost]

TASK [k3s_server_post : Wait for Cilium resources] *****************************
Wednesday 11 March 2026 00:48:50 +0000 (0:00:05.604) 0:03:12.333 *******
ok: [testbed-node-0 -> localhost] => (item=deployment/cilium-operator)
FAILED - RETRYING: [testbed-node-0 -> localhost]: Wait for Cilium resources (30 retries left).
2026-03-11 00:50:04.826423 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=daemonset/cilium) 2026-03-11 00:50:04.826428 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-relay) 2026-03-11 00:50:04.826433 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=deployment/hubble-ui) 2026-03-11 00:50:04.826439 | orchestrator | 2026-03-11 00:50:04.826444 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2026-03-11 00:50:04.826450 | orchestrator | Wednesday 11 March 2026 00:49:34 +0000 (0:00:44.286) 0:03:56.620 ******* 2026-03-11 00:50:04.826462 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-11 00:50:04.826467 | orchestrator | 2026-03-11 00:50:04.826472 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2026-03-11 00:50:04.826478 | orchestrator | Wednesday 11 March 2026 00:49:35 +0000 (0:00:01.179) 0:03:57.799 ******* 2026-03-11 00:50:04.826483 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-11 00:50:04.826489 | orchestrator | 2026-03-11 00:50:04.826494 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2026-03-11 00:50:04.826499 | orchestrator | Wednesday 11 March 2026 00:49:37 +0000 (0:00:01.483) 0:03:59.283 ******* 2026-03-11 00:50:04.826504 | orchestrator | changed: [testbed-node-0 -> localhost] 2026-03-11 00:50:04.826510 | orchestrator | 2026-03-11 00:50:04.826515 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2026-03-11 00:50:04.826523 | orchestrator | Wednesday 11 March 2026 00:49:38 +0000 (0:00:01.022) 0:04:00.305 ******* 2026-03-11 00:50:04.826529 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:50:04.826534 | orchestrator | 2026-03-11 00:50:04.826540 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2026-03-11 00:50:04.826545 | orchestrator 
| Wednesday 11 March 2026 00:49:38 +0000 (0:00:00.105) 0:04:00.411 ******* 2026-03-11 00:50:04.826550 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io) 2026-03-11 00:50:04.826556 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=kubectl get CiliumLoadBalancerIPPool.cilium.io) 2026-03-11 00:50:04.826561 | orchestrator | 2026-03-11 00:50:04.826566 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2026-03-11 00:50:04.826572 | orchestrator | Wednesday 11 March 2026 00:49:39 +0000 (0:00:01.510) 0:04:01.921 ******* 2026-03-11 00:50:04.826577 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:50:04.826582 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:50:04.826587 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:50:04.826593 | orchestrator | 2026-03-11 00:50:04.826598 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2026-03-11 00:50:04.826603 | orchestrator | Wednesday 11 March 2026 00:49:40 +0000 (0:00:00.277) 0:04:02.199 ******* 2026-03-11 00:50:04.826609 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:50:04.826614 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:50:04.826619 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:50:04.826624 | orchestrator | 2026-03-11 00:50:04.826628 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2026-03-11 00:50:04.826632 | orchestrator | 2026-03-11 00:50:04.826636 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2026-03-11 00:50:04.826639 | orchestrator | Wednesday 11 March 2026 00:49:41 +0000 (0:00:01.026) 0:04:03.225 ******* 2026-03-11 00:50:04.826643 | orchestrator | ok: [testbed-manager] 2026-03-11 00:50:04.826646 | orchestrator | 2026-03-11 00:50:04.826650 | orchestrator | TASK [k9s : Include distribution specific install tasks] 
***********************
Wednesday 11 March 2026 00:49:41 +0000 (0:00:00.131) 0:04:03.357 *******
included: /ansible/roles/k9s/tasks/install-Debian-family.yml for testbed-manager

TASK [k9s : Install k9s packages] **********************************************
Wednesday 11 March 2026 00:49:41 +0000 (0:00:00.201) 0:04:03.559 *******
changed: [testbed-manager]

PLAY [Manage labels, annotations, and taints on all k3s nodes] *****************

TASK [Merge labels, annotations, and taints] ***********************************
Wednesday 11 March 2026 00:49:46 +0000 (0:00:05.173) 0:04:08.733 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [Manage labels] ***********************************************************
Wednesday 11 March 2026 00:49:47 +0000 (0:00:00.630) 0:04:09.364 *******
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true)
ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker)
ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true)
ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mds=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true)
ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)
ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true)

TASK [Manage annotations] ******************************************************
Wednesday 11 March 2026 00:50:00 +0000 (0:00:13.635) 0:04:22.999 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [Manage taints] ***********************************************************
Wednesday 11 March 2026 00:50:01 +0000 (0:00:00.688) 0:04:23.688 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY RECAP *********************************************************************
testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
testbed-node-0 : ok=50  changed=23  unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
testbed-node-1 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
testbed-node-2 : ok=38  changed=16  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
testbed-node-3 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
testbed-node-4 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0
testbed-node-5 : ok=16  changed=8  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0

TASKS RECAP ********************************************************************
Wednesday 11 March 2026 00:50:01 +0000 (0:00:00.365) 0:04:24.053 *******
===============================================================================
k3s_server_post : Wait for Cilium resources ---------------------------- 44.29s
k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 43.50s
k3s_server : Enable and check K3s service ------------------------------ 24.57s
Manage labels ---------------------------------------------------------- 13.64s
kubectl : Install required packages ------------------------------------ 11.32s
k3s_agent : Manage k3s service ----------------------------------------- 10.25s
kubectl : Add repository Debian ----------------------------------------- 6.41s
k3s_server_post : Install Cilium ---------------------------------------- 5.60s
k3s_download : Download k3s binary x64 ---------------------------------- 5.30s
k9s : Install k9s packages ---------------------------------------------- 5.17s
k3s_download : Download k3s binary armhf -------------------------------- 3.95s
k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 3.22s
k3s_download : Download k3s binary arm64 -------------------------------- 3.21s
k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 2.97s
k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.55s
k3s_server : Detect Kubernetes version for label compatibility ---------- 2.32s
k3s_server : Copy vip manifest to first master -------------------------- 2.10s
k3s_server : Validating arguments against arg spec 'main' - Setup k3s servers --- 1.95s
k3s_server : Clean previous runs of k3s-init ---------------------------- 1.87s
k3s_server : Init cluster inside the transient k3s-init service --------- 1.79s

2026-03-11 00:50:04 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED
2026-03-11 00:50:04 | INFO  | Task 5becaa37-3b6b-4347-9c4c-3c6f4ce96b2c is in state STARTED
2026-03-11 00:50:04 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:50:04 | INFO  | Task 4f1e4fd9-97be-4291-96ae-cdfbac507040 is in state STARTED
2026-03-11 00:50:04 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:50:07 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED
2026-03-11 00:50:07 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097
is in state STARTED
2026-03-11 00:50:07 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED
2026-03-11 00:50:07 | INFO  | Task 5becaa37-3b6b-4347-9c4c-3c6f4ce96b2c is in state STARTED
2026-03-11 00:50:07 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:50:07 | INFO  | Task 4f1e4fd9-97be-4291-96ae-cdfbac507040 is in state STARTED
2026-03-11 00:50:07 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:50:10 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED
2026-03-11 00:50:10 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED
2026-03-11 00:50:10 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED
2026-03-11 00:50:10 | INFO  | Task 5becaa37-3b6b-4347-9c4c-3c6f4ce96b2c is in state SUCCESS
2026-03-11 00:50:10 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:50:10 | INFO  | Task 4f1e4fd9-97be-4291-96ae-cdfbac507040 is in state STARTED
2026-03-11 00:50:10 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:50:13 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED
2026-03-11 00:50:13 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED
2026-03-11 00:50:13 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED
2026-03-11 00:50:13 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:50:13 | INFO  | Task 4f1e4fd9-97be-4291-96ae-cdfbac507040 is in state SUCCESS
2026-03-11 00:50:13 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:50:16 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED
2026-03-11 00:50:16 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED
2026-03-11 00:50:16 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED
2026-03-11 00:50:16 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:50:16 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:50:20 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED
2026-03-11 00:50:20 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED
2026-03-11 00:50:20 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED
2026-03-11 00:50:20 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:50:20 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:50:23 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED
2026-03-11 00:50:23 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED
Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED
2026-03-11 00:50:41 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:50:41 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:50:44 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED
2026-03-11 00:50:44 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state STARTED
2026-03-11 00:50:44 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED
2026-03-11 00:50:44 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:50:44 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:50:47 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED
2026-03-11 00:50:47 | INFO  | Task b82d86c8-6d3a-41cb-a01e-7c07696ea097 is in state SUCCESS

PLAY [Copy kubeconfig to the configuration repository] *************************

TASK [Get kubeconfig file] *****************************************************
Wednesday 11 March 2026 00:50:07 +0000 (0:00:00.150) 0:00:00.150 *******
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Wednesday 11 March 2026 00:50:08 +0000 (0:00:00.879) 0:00:01.030 *******
changed: [testbed-manager]

TASK [Change server address in the kubeconfig file] ****************************
Wednesday 11 March 2026 00:50:09 +0000 (0:00:01.004) 0:00:02.034 *******
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Wednesday 11 March 2026 00:50:09 +0000 (0:00:00.377) 0:00:02.412 *******
===============================================================================
Write kubeconfig file --------------------------------------------------- 1.00s
Get kubeconfig file ----------------------------------------------------- 0.88s
Change server address in the kubeconfig file ---------------------------- 0.38s

PLAY [Prepare kubeconfig file] *************************************************
TASK [Get home directory of operator user] *************************************
Wednesday 11 March 2026 00:50:05 +0000 (0:00:00.127) 0:00:00.127 *******
ok: [testbed-manager]

TASK [Create .kube directory] **************************************************
Wednesday 11 March 2026 00:50:06 +0000 (0:00:00.489) 0:00:00.617 *******
ok: [testbed-manager]

TASK [Get kubeconfig file] *****************************************************
Wednesday 11 March 2026 00:50:06 +0000 (0:00:00.553) 0:00:01.171 *******
ok: [testbed-manager -> testbed-node-0(192.168.16.10)]

TASK [Write kubeconfig file] ***************************************************
Wednesday 11 March 2026 00:50:07 +0000 (0:00:00.704) 0:00:01.876 *******
changed: [testbed-manager]

TASK [Change server address in the kubeconfig] *********************************
Wednesday 11 March 2026 00:50:08 +0000 (0:00:01.410) 0:00:03.286 *******
changed: [testbed-manager]

TASK [Make kubeconfig available for use inside the manager service] ************
Wednesday 11 March 2026 00:50:09 +0000 (0:00:00.644) 0:00:03.931 *******
changed: [testbed-manager -> localhost]

TASK [Change server address in the kubeconfig inside the manager service] ******
Wednesday 11 March 2026 00:50:10 +0000 (0:00:01.434) 0:00:05.365 *******
changed: [testbed-manager -> localhost]

TASK [Set KUBECONFIG environment variable] *************************************
Wednesday 11 March 2026 00:50:11 +0000 (0:00:00.830) 0:00:06.196 *******
ok: [testbed-manager]

TASK [Enable kubectl command line completion] **********************************
Wednesday 11 March 2026 00:50:12 +0000 (0:00:00.386) 0:00:06.582 *******
ok: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0

TASKS RECAP ********************************************************************
Wednesday 11 March 2026 00:50:12 +0000 (0:00:00.281) 0:00:06.864 *******
===============================================================================
Make kubeconfig available for use inside the manager service ------------ 1.43s
Write kubeconfig file --------------------------------------------------- 1.41s
Change server address in the kubeconfig inside the manager service ------ 0.83s
Get kubeconfig file ----------------------------------------------------- 0.71s
Change server address in the kubeconfig --------------------------------- 0.64s
Create .kube directory -------------------------------------------------- 0.55s
Get home directory of operator user ------------------------------------- 0.49s
Set KUBECONFIG environment variable ------------------------------------- 0.39s
Enable kubectl command line completion ---------------------------------- 0.28s

PLAY [Set kolla_action_rabbitmq] ***********************************************

TASK [Inform the user about the following task] ********************************
Wednesday 11 March 2026 00:48:35 +0000 (0:00:00.180) 0:00:00.180 *******
ok: [localhost] => {
    "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2026-03-11 00:50:47.392858 | orchestrator | }
2026-03-11 00:50:47.392868 | orchestrator |
2026-03-11 00:50:47.392876 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2026-03-11 00:50:47.392898 | orchestrator | Wednesday 11 March 2026 00:48:35 +0000 (0:00:00.103) 0:00:00.283 *******
2026-03-11 00:50:47.392909 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2026-03-11 00:50:47.392918 | orchestrator | ...ignoring
2026-03-11 00:50:47.392927 | orchestrator |
2026-03-11 00:50:47.392935 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2026-03-11 00:50:47.392943 | orchestrator | Wednesday 11 March 2026 00:48:39 +0000 (0:00:03.806) 0:00:04.090 *******
2026-03-11 00:50:47.392952 | orchestrator | skipping: [localhost]
2026-03-11 00:50:47.392960 | orchestrator |
2026-03-11 00:50:47.392968 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2026-03-11 00:50:47.392977 | orchestrator | Wednesday 11 March 2026 00:48:39 +0000 (0:00:00.166) 0:00:04.256 *******
2026-03-11 00:50:47.392985 | orchestrator | ok: [localhost]
2026-03-11 00:50:47.392993 | orchestrator |
2026-03-11 00:50:47.393001 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-11 00:50:47.393009 | orchestrator |
2026-03-11 00:50:47.393018 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-11 00:50:47.393026 | orchestrator | Wednesday 11 March 2026 00:48:40 +0000 (0:00:00.361) 0:00:04.618 *******
2026-03-11 00:50:47.393034 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:47.393042 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:47.393050 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:47.393058 | orchestrator |
2026-03-11 00:50:47.393066 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 00:50:47.393074 | orchestrator | Wednesday 11 March 2026 00:48:40 +0000 (0:00:00.530) 0:00:05.149 *******
2026-03-11 00:50:47.393081 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2026-03-11 00:50:47.393090 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2026-03-11 00:50:47.393098 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2026-03-11 00:50:47.393105 | orchestrator |
2026-03-11 00:50:47.393113 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2026-03-11 00:50:47.393121 | orchestrator |
2026-03-11 00:50:47.393129 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-11 00:50:47.393145 | orchestrator | Wednesday 11 March 2026 00:48:41 +0000 (0:00:01.319) 0:00:06.468 *******
2026-03-11 00:50:47.393165 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:50:47.393175 | orchestrator |
2026-03-11 00:50:47.393183 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-11 00:50:47.393193 | orchestrator | Wednesday 11 March 2026 00:48:42 +0000 (0:00:00.638) 0:00:07.106 *******
2026-03-11 00:50:47.393201 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:47.393209 | orchestrator |
2026-03-11 00:50:47.393217 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2026-03-11 00:50:47.393225 | orchestrator | Wednesday 11 March 2026 00:48:43 +0000 (0:00:01.107) 0:00:08.214 *******
2026-03-11 00:50:47.393233 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:47.393242 | orchestrator |
2026-03-11 00:50:47.393250 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2026-03-11 00:50:47.393259 | orchestrator | Wednesday 11 March 2026 00:48:44 +0000 (0:00:00.331) 0:00:08.546 *******
2026-03-11 00:50:47.393268 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:47.393278 | orchestrator |
2026-03-11 00:50:47.393287 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2026-03-11 00:50:47.393296 | orchestrator | Wednesday 11 March 2026 00:48:44 +0000 (0:00:00.331) 0:00:08.877 *******
2026-03-11 00:50:47.393314 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:47.393323 | orchestrator |
2026-03-11 00:50:47.393333 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2026-03-11 00:50:47.393342 | orchestrator | Wednesday 11 March 2026 00:48:44 +0000 (0:00:00.337) 0:00:09.215 *******
2026-03-11 00:50:47.393351 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:47.393361 | orchestrator |
2026-03-11 00:50:47.393369 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-11 00:50:47.393378 | orchestrator | Wednesday 11 March 2026 00:48:45 +0000 (0:00:01.152) 0:00:10.367 *******
2026-03-11 00:50:47.393387 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:50:47.393396 | orchestrator |
2026-03-11 00:50:47.393405 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2026-03-11 00:50:47.393414 | orchestrator | Wednesday 11 March 2026 00:48:46 +0000 (0:00:00.777) 0:00:11.145 *******
2026-03-11 00:50:47.393424 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:47.393433 | orchestrator |
2026-03-11 00:50:47.393442 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2026-03-11 00:50:47.393451 | orchestrator | Wednesday 11 March 2026 00:48:47 +0000 (0:00:00.901) 0:00:12.046 *******
2026-03-11 00:50:47.393461 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:47.393470 | orchestrator |
2026-03-11 00:50:47.393478 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2026-03-11 00:50:47.393486 | orchestrator | Wednesday 11 March 2026 00:48:47 +0000 (0:00:00.331) 0:00:12.378 *******
2026-03-11 00:50:47.393494 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:47.393502 | orchestrator |
2026-03-11 00:50:47.393526 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2026-03-11 00:50:47.393535 | orchestrator | Wednesday 11 March 2026 00:48:48 +0000 (0:00:00.363) 0:00:12.741 *******
2026-03-11 00:50:47.393547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-11 00:50:47.393563 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-11 00:50:47.393581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-11 00:50:47.393590 | orchestrator |
2026-03-11 00:50:47.393599 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2026-03-11 00:50:47.393607 | orchestrator | Wednesday 11 March 2026 00:48:49 +0000 (0:00:01.450) 0:00:14.192 *******
2026-03-11 00:50:47.393624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-11 00:50:47.393634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-11 00:50:47.393648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-11 00:50:47.393663 | orchestrator |
2026-03-11 00:50:47.393671 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2026-03-11 00:50:47.393679 | orchestrator | Wednesday 11 March 2026 00:48:52 +0000 (0:00:03.022) 0:00:17.214 *******
2026-03-11 00:50:47.393688 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-11 00:50:47.393696 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-11 00:50:47.393704 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2026-03-11 00:50:47.393711 | orchestrator |
2026-03-11 00:50:47.393719 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2026-03-11 00:50:47.393727 | orchestrator | Wednesday 11 March 2026 00:48:55 +0000 (0:00:02.677) 0:00:19.891 *******
2026-03-11 00:50:47.393735 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-11 00:50:47.393745 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-11 00:50:47.393753 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2026-03-11 00:50:47.393762 | orchestrator |
2026-03-11 00:50:47.393771 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2026-03-11 00:50:47.393780 | orchestrator | Wednesday 11 March 2026 00:48:57 +0000 (0:00:02.455) 0:00:22.347 *******
2026-03-11 00:50:47.393790 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-11 00:50:47.393799 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-11 00:50:47.393808 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2026-03-11 00:50:47.393817 | orchestrator |
2026-03-11 00:50:47.393826 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2026-03-11 00:50:47.393836 | orchestrator | Wednesday 11 March 2026 00:48:59 +0000 (0:00:01.956) 0:00:24.304 *******
2026-03-11 00:50:47.393851 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-11 00:50:47.393860 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-11 00:50:47.393868 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2026-03-11 00:50:47.393877 | orchestrator |
2026-03-11 00:50:47.394076 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2026-03-11 00:50:47.394096 | orchestrator | Wednesday 11 March 2026 00:49:02 +0000 (0:00:02.264) 0:00:26.569 *******
2026-03-11 00:50:47.394105 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-11 00:50:47.394114 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-11 00:50:47.394123 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2026-03-11 00:50:47.394131 | orchestrator |
2026-03-11 00:50:47.394139 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2026-03-11 00:50:47.394148 | orchestrator | Wednesday 11 March 2026 00:49:03 +0000 (0:00:01.450) 0:00:28.019 *******
2026-03-11 00:50:47.394156 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-11 00:50:47.394174 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-11 00:50:47.394183 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2026-03-11 00:50:47.394191 | orchestrator |
2026-03-11 00:50:47.394199 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2026-03-11 00:50:47.394207 | orchestrator | Wednesday 11 March 2026 00:49:04 +0000 (0:00:01.355) 0:00:29.374 *******
2026-03-11 00:50:47.394215 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:47.394223 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:47.394232 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:47.394240 | orchestrator |
2026-03-11 00:50:47.394248 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2026-03-11 00:50:47.394257 | orchestrator | Wednesday 11 March 2026 00:49:05 +0000 (0:00:00.532) 0:00:29.907 *******
2026-03-11 00:50:47.394272 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-11 00:50:47.394283 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-11 00:50:47.394303 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-11 00:50:47.394321 | orchestrator |
2026-03-11 00:50:47.394331 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2026-03-11 00:50:47.394340 | orchestrator | Wednesday 11 March 2026 00:49:06 +0000 (0:00:01.389) 0:00:31.297 *******
2026-03-11 00:50:47.394349 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:47.394358 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:47.394367 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:47.394376 | orchestrator |
2026-03-11 00:50:47.394385 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2026-03-11 00:50:47.394394 | orchestrator | Wednesday 11 March 2026 00:49:07 +0000 (0:00:00.849) 0:00:32.147 *******
2026-03-11 00:50:47.394403 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:47.394412 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:47.394421 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:47.394430 | orchestrator |
2026-03-11 00:50:47.394439 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2026-03-11 00:50:47.394448 | orchestrator | Wednesday 11 March 2026 00:49:13 +0000 (0:00:06.145) 0:00:38.292 *******
2026-03-11 00:50:47.394457 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:47.394466 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:47.394475 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:47.394484 | orchestrator |
2026-03-11 00:50:47.394493 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-11 00:50:47.394502 | orchestrator |
2026-03-11 00:50:47.394511 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-11 00:50:47.394521 | orchestrator | Wednesday 11 March 2026 00:49:14 +0000 (0:00:00.494) 0:00:38.786 *******
2026-03-11 00:50:47.394530 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:47.394540 | orchestrator |
2026-03-11 00:50:47.394549 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-11 00:50:47.394558 | orchestrator | Wednesday 11 March 2026 00:49:14 +0000 (0:00:00.629) 0:00:39.416 *******
2026-03-11 00:50:47.394566 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:50:47.394575 | orchestrator |
2026-03-11 00:50:47.394583 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-11 00:50:47.394591 | orchestrator | Wednesday 11 March 2026 00:49:15 +0000 (0:00:00.225) 0:00:39.641 *******
2026-03-11 00:50:47.394599 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:47.394608 | orchestrator |
2026-03-11 00:50:47.394617 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-11 00:50:47.394626 | orchestrator | Wednesday 11 March 2026 00:49:16 +0000 (0:00:01.648) 0:00:41.290 *******
2026-03-11 00:50:47.394634 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:50:47.394644 | orchestrator |
2026-03-11 00:50:47.394653 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-11 00:50:47.394661 | orchestrator |
2026-03-11 00:50:47.394670 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-11 00:50:47.394680 | orchestrator | Wednesday 11 March 2026 00:50:12 +0000 (0:00:56.094) 0:01:37.384 *******
2026-03-11 00:50:47.394689 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:47.394698 | orchestrator |
2026-03-11 00:50:47.394707 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-11 00:50:47.394715 | orchestrator | Wednesday 11 March 2026 00:50:13 +0000 (0:00:00.574) 0:01:37.958 *******
2026-03-11 00:50:47.394724 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:50:47.394733 | orchestrator |
2026-03-11 00:50:47.394742 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-11 00:50:47.394751 | orchestrator | Wednesday 11 March 2026 00:50:13 +0000 (0:00:00.207) 0:01:38.165 *******
2026-03-11 00:50:47.394760 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:47.394768 | orchestrator |
2026-03-11 00:50:47.394777 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-11 00:50:47.394794 | orchestrator | Wednesday 11 March 2026 00:50:15 +0000 (0:00:01.846) 0:01:40.012 *******
2026-03-11 00:50:47.394804 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:50:47.394813 | orchestrator |
2026-03-11 00:50:47.394821 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2026-03-11 00:50:47.394830 | orchestrator |
2026-03-11 00:50:47.394839 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2026-03-11 00:50:47.394848 | orchestrator | Wednesday 11 March 2026 00:50:27 +0000 (0:00:12.282) 0:01:52.295 *******
2026-03-11 00:50:47.394856 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:47.394864 | orchestrator |
2026-03-11 00:50:47.394873 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2026-03-11 00:50:47.394882 | orchestrator | Wednesday 11 March 2026 00:50:28 +0000 (0:00:00.512) 0:01:52.807 *******
2026-03-11 00:50:47.394946 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:50:47.394956 | orchestrator |
2026-03-11 00:50:47.394964 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2026-03-11 00:50:47.394974 | orchestrator | Wednesday 11 March 2026 00:50:28 +0000 (0:00:00.180) 0:01:52.988 *******
2026-03-11 00:50:47.394983 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:47.394991 | orchestrator |
2026-03-11 00:50:47.395000 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2026-03-11 00:50:47.395017 | orchestrator | Wednesday 11 March 2026 00:50:29 +0000 (0:00:01.336) 0:01:54.324 *******
2026-03-11 00:50:47.395026 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:50:47.395036 | orchestrator |
2026-03-11 00:50:47.395045 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2026-03-11 00:50:47.395054 | orchestrator |
2026-03-11 00:50:47.395063 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2026-03-11 00:50:47.395073 | orchestrator | Wednesday 11 March 2026 00:50:42 +0000 (0:00:12.346) 0:02:06.671 *******
2026-03-11 00:50:47.395082 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:50:47.395090 | orchestrator |
2026-03-11 00:50:47.395100 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2026-03-11 00:50:47.395109 | orchestrator | Wednesday 11 March 2026 00:50:42 +0000 (0:00:00.488) 0:02:07.160 *******
2026-03-11 00:50:47.395118 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:50:47.395128 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:50:47.395137 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:50:47.395146 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-11 00:50:47.395155 | orchestrator | enable_outward_rabbitmq_True
2026-03-11 00:50:47.395164 | orchestrator |
2026-03-11 00:50:47.395174 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2026-03-11 00:50:47.395183 | orchestrator | skipping: no hosts matched
2026-03-11 00:50:47.395191 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2026-03-11 00:50:47.395199 | orchestrator | outward_rabbitmq_restart
2026-03-11 00:50:47.395208 | orchestrator |
2026-03-11 00:50:47.395217 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2026-03-11 00:50:47.395226 | orchestrator | skipping: no hosts matched
2026-03-11 00:50:47.395235 | orchestrator |
2026-03-11 00:50:47.395244 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2026-03-11 00:50:47.395253 | orchestrator | skipping: no hosts matched
2026-03-11 00:50:47.395262 | orchestrator |
2026-03-11 00:50:47.395272 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:50:47.395282 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2026-03-11 00:50:47.395292 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2026-03-11 00:50:47.395301 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-11 00:50:47.395373 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2026-03-11 00:50:47.395392 | orchestrator |
2026-03-11 00:50:47.395402 | orchestrator |
2026-03-11 00:50:47.395412 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:50:47.395424 | orchestrator | Wednesday 11 March 2026 00:50:45 +0000 (0:00:02.604) 0:02:09.764 *******
2026-03-11 00:50:47.395434 | orchestrator | ===============================================================================
2026-03-11 00:50:47.395443 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 80.72s
2026-03-11 00:50:47.395452 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.15s
2026-03-11 00:50:47.395461 | orchestrator | rabbitmq : Restart rabbitmq container ----------------------------------- 4.83s
2026-03-11 00:50:47.395469 | orchestrator | Check RabbitMQ service -------------------------------------------------- 3.81s
2026-03-11 00:50:47.395478 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 3.02s
2026-03-11 00:50:47.395486 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.68s
2026-03-11 00:50:47.395495 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.60s
2026-03-11 00:50:47.395503 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.46s
2026-03-11 00:50:47.395511 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.26s
2026-03-11 00:50:47.395519 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.96s
2026-03-11 00:50:47.395527 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.72s
2026-03-11 00:50:47.395535 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.45s
2026-03-11 00:50:47.395543 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.45s
2026-03-11 00:50:47.395551 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.39s
2026-03-11 00:50:47.395560 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.36s
2026-03-11 00:50:47.395568 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.32s
2026-03-11 00:50:47.395577 | orchestrator | rabbitmq : Catch when RabbitMQ is being downgraded ---------------------- 1.15s
2026-03-11 00:50:47.395585 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.11s
2026-03-11 00:50:47.395592 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.90s
2026-03-11 00:50:47.395601 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.85s
2026-03-11 00:50:47.395610 | orchestrator | 2026-03-11 00:50:47 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED
2026-03-11 00:50:47.396042 | orchestrator | 2026-03-11 00:50:47 | INFO  
| Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:50:47.396418 | orchestrator | 2026-03-11 00:50:47 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:50:50.455742 | orchestrator | 2026-03-11 00:50:50 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:50:50.457969 | orchestrator | 2026-03-11 00:50:50 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:50:50.458787 | orchestrator | 2026-03-11 00:50:50 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:50:50.458964 | orchestrator | 2026-03-11 00:50:50 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:50:53.486491 | orchestrator | 2026-03-11 00:50:53 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:50:53.487451 | orchestrator | 2026-03-11 00:50:53 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:50:53.488574 | orchestrator | 2026-03-11 00:50:53 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:50:53.488786 | orchestrator | 2026-03-11 00:50:53 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:50:56.515538 | orchestrator | 2026-03-11 00:50:56 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:50:56.516192 | orchestrator | 2026-03-11 00:50:56 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:50:56.516565 | orchestrator | 2026-03-11 00:50:56 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:50:56.516631 | orchestrator | 2026-03-11 00:50:56 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:50:59.545300 | orchestrator | 2026-03-11 00:50:59 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:50:59.545868 | orchestrator | 2026-03-11 00:50:59 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state 
STARTED 2026-03-11 00:50:59.548442 | orchestrator | 2026-03-11 00:50:59 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:50:59.548488 | orchestrator | 2026-03-11 00:50:59 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:51:02.596571 | orchestrator | 2026-03-11 00:51:02 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:51:02.596997 | orchestrator | 2026-03-11 00:51:02 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:51:02.597470 | orchestrator | 2026-03-11 00:51:02 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:51:02.597590 | orchestrator | 2026-03-11 00:51:02 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:51:05.636646 | orchestrator | 2026-03-11 00:51:05 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:51:05.637956 | orchestrator | 2026-03-11 00:51:05 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:51:05.640738 | orchestrator | 2026-03-11 00:51:05 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:51:05.640771 | orchestrator | 2026-03-11 00:51:05 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:51:08.676760 | orchestrator | 2026-03-11 00:51:08 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:51:08.677219 | orchestrator | 2026-03-11 00:51:08 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:51:08.677992 | orchestrator | 2026-03-11 00:51:08 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:51:08.678098 | orchestrator | 2026-03-11 00:51:08 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:51:11.707393 | orchestrator | 2026-03-11 00:51:11 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:51:11.707864 | orchestrator | 
2026-03-11 00:51:11 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:51:11.708844 | orchestrator | 2026-03-11 00:51:11 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:51:11.708919 | orchestrator | 2026-03-11 00:51:11 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:51:14.744187 | orchestrator | 2026-03-11 00:51:14 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:51:14.745772 | orchestrator | 2026-03-11 00:51:14 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:51:14.747254 | orchestrator | 2026-03-11 00:51:14 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:51:14.747415 | orchestrator | 2026-03-11 00:51:14 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:51:17.793338 | orchestrator | 2026-03-11 00:51:17 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:51:17.794953 | orchestrator | 2026-03-11 00:51:17 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:51:17.798386 | orchestrator | 2026-03-11 00:51:17 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:51:17.798472 | orchestrator | 2026-03-11 00:51:17 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:51:20.837979 | orchestrator | 2026-03-11 00:51:20 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:51:20.838724 | orchestrator | 2026-03-11 00:51:20 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:51:20.840280 | orchestrator | 2026-03-11 00:51:20 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:51:20.840495 | orchestrator | 2026-03-11 00:51:20 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:51:23.881130 | orchestrator | 2026-03-11 00:51:23 | INFO  | Task 
ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:51:23.881452 | orchestrator | 2026-03-11 00:51:23 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:51:23.886405 | orchestrator | 2026-03-11 00:51:23 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:51:23.886499 | orchestrator | 2026-03-11 00:51:23 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:51:26.922389 | orchestrator | 2026-03-11 00:51:26 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:51:26.922772 | orchestrator | 2026-03-11 00:51:26 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:51:26.923977 | orchestrator | 2026-03-11 00:51:26 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:51:26.924014 | orchestrator | 2026-03-11 00:51:26 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:51:29.958758 | orchestrator | 2026-03-11 00:51:29 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:51:29.959174 | orchestrator | 2026-03-11 00:51:29 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:51:29.962738 | orchestrator | 2026-03-11 00:51:29 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:51:29.962790 | orchestrator | 2026-03-11 00:51:29 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:51:33.025089 | orchestrator | 2026-03-11 00:51:33 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:51:33.031517 | orchestrator | 2026-03-11 00:51:33 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:51:33.033367 | orchestrator | 2026-03-11 00:51:33 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:51:33.033401 | orchestrator | 2026-03-11 00:51:33 | INFO  | Wait 1 second(s) until the next 
check 2026-03-11 00:51:36.074730 | orchestrator | 2026-03-11 00:51:36 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:51:36.077268 | orchestrator | 2026-03-11 00:51:36 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:51:36.079973 | orchestrator | 2026-03-11 00:51:36 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:51:36.080061 | orchestrator | 2026-03-11 00:51:36 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:51:39.132264 | orchestrator | 2026-03-11 00:51:39 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:51:39.135438 | orchestrator | 2026-03-11 00:51:39 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:51:39.137819 | orchestrator | 2026-03-11 00:51:39 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:51:39.138058 | orchestrator | 2026-03-11 00:51:39 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:51:42.187380 | orchestrator | 2026-03-11 00:51:42 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state STARTED 2026-03-11 00:51:42.187563 | orchestrator | 2026-03-11 00:51:42 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:51:42.187586 | orchestrator | 2026-03-11 00:51:42 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:51:42.187591 | orchestrator | 2026-03-11 00:51:42 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:51:45.230561 | orchestrator | 2026-03-11 00:51:45 | INFO  | Task ce8264ae-20aa-419d-9d98-6f93d6e0e06f is in state SUCCESS 2026-03-11 00:51:45.231659 | orchestrator | 2026-03-11 00:51:45.231714 | orchestrator | 2026-03-11 00:51:45.231721 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 00:51:45.231726 | orchestrator | 2026-03-11 00:51:45.231730 | 
orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 00:51:45.231735 | orchestrator | Wednesday 11 March 2026 00:49:21 +0000 (0:00:00.146) 0:00:00.146 ******* 2026-03-11 00:51:45.231739 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:51:45.231744 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:51:45.231749 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:51:45.231752 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:45.231756 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:45.231760 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:45.231763 | orchestrator | 2026-03-11 00:51:45.231767 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 00:51:45.231771 | orchestrator | Wednesday 11 March 2026 00:49:22 +0000 (0:00:00.576) 0:00:00.722 ******* 2026-03-11 00:51:45.231775 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2026-03-11 00:51:45.231779 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2026-03-11 00:51:45.231783 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2026-03-11 00:51:45.231786 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2026-03-11 00:51:45.231790 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2026-03-11 00:51:45.231794 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2026-03-11 00:51:45.231798 | orchestrator | 2026-03-11 00:51:45.231802 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2026-03-11 00:51:45.231805 | orchestrator | 2026-03-11 00:51:45.231809 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2026-03-11 00:51:45.231813 | orchestrator | Wednesday 11 March 2026 00:49:23 +0000 (0:00:00.795) 0:00:01.517 ******* 2026-03-11 00:51:45.231818 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for 
testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:51:45.231823 | orchestrator | 2026-03-11 00:51:45.231827 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2026-03-11 00:51:45.231831 | orchestrator | Wednesday 11 March 2026 00:49:24 +0000 (0:00:00.949) 0:00:02.466 ******* 2026-03-11 00:51:45.231847 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.231869 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.231874 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.231878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.231920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.231925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.231929 | orchestrator | 2026-03-11 00:51:45.231942 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2026-03-11 00:51:45.231946 | orchestrator | Wednesday 11 March 2026 00:49:25 +0000 (0:00:01.376) 0:00:03.842 ******* 2026-03-11 00:51:45.231950 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.231954 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.231958 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.231967 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.231974 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.231978 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.231982 | orchestrator | 2026-03-11 00:51:45.231985 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2026-03-11 00:51:45.231989 | orchestrator | Wednesday 11 March 2026 00:49:27 +0000 (0:00:01.562) 0:00:05.405 ******* 2026-03-11 00:51:45.231993 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.231997 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.232004 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.232008 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.232012 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.232016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.232024 | orchestrator | 2026-03-11 00:51:45.232028 | orchestrator | TASK [ovn-controller : Copying over systemd override] ************************** 2026-03-11 00:51:45.232032 | orchestrator | Wednesday 11 March 2026 00:49:28 +0000 (0:00:01.320) 0:00:06.726 ******* 2026-03-11 00:51:45.232035 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.232042 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.232046 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.232049 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.232053 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.232057 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.232101 | orchestrator | 2026-03-11 00:51:45.232109 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************ 2026-03-11 00:51:45.232113 | orchestrator | Wednesday 11 March 2026 00:49:30 +0000 (0:00:02.148) 0:00:08.875 ******* 2026-03-11 00:51:45.232117 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.232120 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.232129 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.232133 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.232140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.232144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.232148 | orchestrator | 2026-03-11 00:51:45.232324 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ******************** 2026-03-11 00:51:45.232334 | orchestrator | Wednesday 11 March 2026 00:49:31 +0000 (0:00:01.369) 0:00:10.244 ******* 2026-03-11 00:51:45.232340 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:51:45.232348 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:51:45.232353 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:51:45.232364 | 
orchestrator | changed: [testbed-node-4] 2026-03-11 00:51:45.232373 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:51:45.232379 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:51:45.232385 | orchestrator | 2026-03-11 00:51:45.232391 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] ********************************* 2026-03-11 00:51:45.232397 | orchestrator | Wednesday 11 March 2026 00:49:34 +0000 (0:00:02.614) 0:00:12.858 ******* 2026-03-11 00:51:45.232402 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'}) 2026-03-11 00:51:45.232409 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'}) 2026-03-11 00:51:45.232416 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'}) 2026-03-11 00:51:45.232422 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'}) 2026-03-11 00:51:45.232428 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'}) 2026-03-11 00:51:45.232434 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'}) 2026-03-11 00:51:45.232440 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-11 00:51:45.232446 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-11 00:51:45.232457 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-11 00:51:45.232470 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-11 00:51:45.232476 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'}) 2026-03-11 00:51:45.232482 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 
'geneve'}) 2026-03-11 00:51:45.232488 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-11 00:51:45.232496 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-11 00:51:45.232502 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-11 00:51:45.232509 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-11 00:51:45.232515 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-11 00:51:45.232522 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'}) 2026-03-11 00:51:45.232528 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-11 00:51:45.232535 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-11 00:51:45.232541 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-11 00:51:45.232548 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-11 00:51:45.232554 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-11 00:51:45.232561 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'}) 2026-03-11 00:51:45.232568 | orchestrator | changed: 
[testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-11 00:51:45.232581 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-11 00:51:45.232588 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-11 00:51:45.232594 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-11 00:51:45.232601 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-11 00:51:45.232605 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'}) 2026-03-11 00:51:45.232608 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-11 00:51:45.232613 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-11 00:51:45.232616 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-11 00:51:45.232620 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-11 00:51:45.232624 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-11 00:51:45.232628 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False}) 2026-03-11 00:51:45.232632 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-11 00:51:45.232637 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-11 00:51:45.232651 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-11 00:51:45.232659 | 
orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-11 00:51:45.232666 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'}) 2026-03-11 00:51:45.232672 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'}) 2026-03-11 00:51:45.232679 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'}) 2026-03-11 00:51:45.232685 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'}) 2026-03-11 00:51:45.232697 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'}) 2026-03-11 00:51:45.232703 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'}) 2026-03-11 00:51:45.232710 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'}) 2026-03-11 00:51:45.232716 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-11 00:51:45.232722 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'}) 2026-03-11 00:51:45.232728 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-11 00:51:45.232734 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-11 00:51:45.232740 | orchestrator | changed: 
[testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-11 00:51:45.232812 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'}) 2026-03-11 00:51:45.232816 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'}) 2026-03-11 00:51:45.232820 | orchestrator | 2026-03-11 00:51:45.232824 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-11 00:51:45.232828 | orchestrator | Wednesday 11 March 2026 00:49:55 +0000 (0:00:20.783) 0:00:33.642 ******* 2026-03-11 00:51:45.232832 | orchestrator | 2026-03-11 00:51:45.232836 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-11 00:51:45.232839 | orchestrator | Wednesday 11 March 2026 00:49:55 +0000 (0:00:00.063) 0:00:33.706 ******* 2026-03-11 00:51:45.232843 | orchestrator | 2026-03-11 00:51:45.232847 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-11 00:51:45.232851 | orchestrator | Wednesday 11 March 2026 00:49:55 +0000 (0:00:00.063) 0:00:33.769 ******* 2026-03-11 00:51:45.232855 | orchestrator | 2026-03-11 00:51:45.232859 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-11 00:51:45.232863 | orchestrator | Wednesday 11 March 2026 00:49:55 +0000 (0:00:00.076) 0:00:33.845 ******* 2026-03-11 00:51:45.232867 | orchestrator | 2026-03-11 00:51:45.232875 | orchestrator | TASK [ovn-controller : Flush handlers] ***************************************** 2026-03-11 00:51:45.232957 | orchestrator | Wednesday 11 March 2026 00:49:55 +0000 (0:00:00.059) 0:00:33.905 ******* 2026-03-11 00:51:45.232965 | orchestrator | 2026-03-11 00:51:45.232969 | orchestrator | TASK [ovn-controller : Flush 
handlers] ***************************************** 2026-03-11 00:51:45.232972 | orchestrator | Wednesday 11 March 2026 00:49:55 +0000 (0:00:00.061) 0:00:33.967 ******* 2026-03-11 00:51:45.232982 | orchestrator | 2026-03-11 00:51:45.232986 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] *********************** 2026-03-11 00:51:45.232990 | orchestrator | Wednesday 11 March 2026 00:49:55 +0000 (0:00:00.060) 0:00:34.028 ******* 2026-03-11 00:51:45.232994 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:51:45.232998 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:51:45.233002 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:45.233006 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:45.233010 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:51:45.233014 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:45.233017 | orchestrator | 2026-03-11 00:51:45.233021 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************ 2026-03-11 00:51:45.233025 | orchestrator | Wednesday 11 March 2026 00:49:57 +0000 (0:00:02.342) 0:00:36.371 ******* 2026-03-11 00:51:45.233029 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:51:45.233033 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:51:45.233036 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:51:45.233040 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:51:45.233044 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:51:45.233048 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:51:45.233051 | orchestrator | 2026-03-11 00:51:45.233055 | orchestrator | PLAY [Apply role ovn-db] ******************************************************* 2026-03-11 00:51:45.233059 | orchestrator | 2026-03-11 00:51:45.233062 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-11 00:51:45.233066 | orchestrator | Wednesday 11 March 2026 00:50:22 +0000 (0:00:24.981) 0:01:01.352 
******* 2026-03-11 00:51:45.233070 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:51:45.233074 | orchestrator | 2026-03-11 00:51:45.233078 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-11 00:51:45.233081 | orchestrator | Wednesday 11 March 2026 00:50:23 +0000 (0:00:00.622) 0:01:01.975 ******* 2026-03-11 00:51:45.233085 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:51:45.233090 | orchestrator | 2026-03-11 00:51:45.233148 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] ************* 2026-03-11 00:51:45.233153 | orchestrator | Wednesday 11 March 2026 00:50:24 +0000 (0:00:00.509) 0:01:02.484 ******* 2026-03-11 00:51:45.233157 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:45.233161 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:45.233165 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:45.233169 | orchestrator | 2026-03-11 00:51:45.233172 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] *************** 2026-03-11 00:51:45.233177 | orchestrator | Wednesday 11 March 2026 00:50:25 +0000 (0:00:01.305) 0:01:03.789 ******* 2026-03-11 00:51:45.233180 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:45.233184 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:45.233188 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:45.233198 | orchestrator | 2026-03-11 00:51:45.233202 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] *************** 2026-03-11 00:51:45.233206 | orchestrator | Wednesday 11 March 2026 00:50:26 +0000 (0:00:00.699) 0:01:04.489 ******* 2026-03-11 00:51:45.233210 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:45.233214 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:45.233218 | 
orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:45.233222 | orchestrator | 2026-03-11 00:51:45.233226 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] ******* 2026-03-11 00:51:45.233230 | orchestrator | Wednesday 11 March 2026 00:50:26 +0000 (0:00:00.306) 0:01:04.795 ******* 2026-03-11 00:51:45.233233 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:45.233237 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:45.233241 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:45.233245 | orchestrator | 2026-03-11 00:51:45.233249 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] ******* 2026-03-11 00:51:45.233252 | orchestrator | Wednesday 11 March 2026 00:50:26 +0000 (0:00:00.288) 0:01:05.083 ******* 2026-03-11 00:51:45.233262 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:45.233266 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:45.233269 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:45.233273 | orchestrator | 2026-03-11 00:51:45.233277 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************ 2026-03-11 00:51:45.233281 | orchestrator | Wednesday 11 March 2026 00:50:27 +0000 (0:00:00.453) 0:01:05.537 ******* 2026-03-11 00:51:45.233285 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.233289 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.233293 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.233297 | orchestrator | 2026-03-11 00:51:45.233300 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] ***************************** 2026-03-11 00:51:45.233304 | orchestrator | Wednesday 11 March 2026 00:50:27 +0000 (0:00:00.261) 0:01:05.798 ******* 2026-03-11 00:51:45.233310 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.233317 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.233322 | orchestrator | skipping: [testbed-node-2] 
2026-03-11 00:51:45.233330 | orchestrator | 2026-03-11 00:51:45.233339 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] ************* 2026-03-11 00:51:45.233347 | orchestrator | Wednesday 11 March 2026 00:50:27 +0000 (0:00:00.233) 0:01:06.031 ******* 2026-03-11 00:51:45.233353 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.233360 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.233366 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.233372 | orchestrator | 2026-03-11 00:51:45.233377 | orchestrator | TASK [ovn-db : Get OVN NB database information] ******************************** 2026-03-11 00:51:45.233383 | orchestrator | Wednesday 11 March 2026 00:50:27 +0000 (0:00:00.260) 0:01:06.292 ******* 2026-03-11 00:51:45.233388 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.233394 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.233400 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.233406 | orchestrator | 2026-03-11 00:51:45.233425 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] ************** 2026-03-11 00:51:45.233432 | orchestrator | Wednesday 11 March 2026 00:50:28 +0000 (0:00:00.452) 0:01:06.745 ******* 2026-03-11 00:51:45.233439 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.233445 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.233452 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.233458 | orchestrator | 2026-03-11 00:51:45.233464 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] ***************** 2026-03-11 00:51:45.233470 | orchestrator | Wednesday 11 March 2026 00:50:28 +0000 (0:00:00.262) 0:01:07.007 ******* 2026-03-11 00:51:45.233476 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.233482 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.233488 | orchestrator | skipping: [testbed-node-2] 
2026-03-11 00:51:45.233494 | orchestrator | 2026-03-11 00:51:45.233500 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************ 2026-03-11 00:51:45.233506 | orchestrator | Wednesday 11 March 2026 00:50:28 +0000 (0:00:00.263) 0:01:07.271 ******* 2026-03-11 00:51:45.233512 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.233518 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.233524 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.233531 | orchestrator | 2026-03-11 00:51:45.233536 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] ***************************** 2026-03-11 00:51:45.233572 | orchestrator | Wednesday 11 March 2026 00:50:29 +0000 (0:00:00.264) 0:01:07.535 ******* 2026-03-11 00:51:45.233580 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.233586 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.233592 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.233599 | orchestrator | 2026-03-11 00:51:45.233605 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] ************* 2026-03-11 00:51:45.233611 | orchestrator | Wednesday 11 March 2026 00:50:29 +0000 (0:00:00.416) 0:01:07.951 ******* 2026-03-11 00:51:45.233626 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.233632 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.233638 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.233644 | orchestrator | 2026-03-11 00:51:45.233650 | orchestrator | TASK [ovn-db : Get OVN SB database information] ******************************** 2026-03-11 00:51:45.233656 | orchestrator | Wednesday 11 March 2026 00:50:29 +0000 (0:00:00.277) 0:01:08.229 ******* 2026-03-11 00:51:45.233662 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.233668 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.233674 | orchestrator | skipping: [testbed-node-2] 
2026-03-11 00:51:45.233680 | orchestrator | 2026-03-11 00:51:45.233686 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] ************** 2026-03-11 00:51:45.233692 | orchestrator | Wednesday 11 March 2026 00:50:30 +0000 (0:00:00.329) 0:01:08.559 ******* 2026-03-11 00:51:45.233697 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.233704 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.233709 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.233715 | orchestrator | 2026-03-11 00:51:45.233721 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] ***************** 2026-03-11 00:51:45.233728 | orchestrator | Wednesday 11 March 2026 00:50:30 +0000 (0:00:00.261) 0:01:08.820 ******* 2026-03-11 00:51:45.233734 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.233740 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.233753 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.233760 | orchestrator | 2026-03-11 00:51:45.233766 | orchestrator | TASK [ovn-db : include_tasks] ************************************************** 2026-03-11 00:51:45.233772 | orchestrator | Wednesday 11 March 2026 00:50:30 +0000 (0:00:00.344) 0:01:09.165 ******* 2026-03-11 00:51:45.233779 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:51:45.233785 | orchestrator | 2026-03-11 00:51:45.233790 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] ******************* 2026-03-11 00:51:45.233796 | orchestrator | Wednesday 11 March 2026 00:50:31 +0000 (0:00:00.863) 0:01:10.028 ******* 2026-03-11 00:51:45.233803 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:45.233809 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:45.233814 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:45.233820 | orchestrator | 2026-03-11 00:51:45.233827 | 
orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] ******************* 2026-03-11 00:51:45.233833 | orchestrator | Wednesday 11 March 2026 00:50:32 +0000 (0:00:00.514) 0:01:10.543 ******* 2026-03-11 00:51:45.233838 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:45.233845 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:45.233851 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:45.233857 | orchestrator | 2026-03-11 00:51:45.233862 | orchestrator | TASK [ovn-db : Check NB cluster status] **************************************** 2026-03-11 00:51:45.233869 | orchestrator | Wednesday 11 March 2026 00:50:32 +0000 (0:00:00.537) 0:01:11.081 ******* 2026-03-11 00:51:45.233875 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.233902 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.233909 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.233940 | orchestrator | 2026-03-11 00:51:45.233946 | orchestrator | TASK [ovn-db : Check SB cluster status] **************************************** 2026-03-11 00:51:45.233952 | orchestrator | Wednesday 11 March 2026 00:50:33 +0000 (0:00:00.586) 0:01:11.667 ******* 2026-03-11 00:51:45.233959 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.233963 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.233967 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.233971 | orchestrator | 2026-03-11 00:51:45.233975 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] *** 2026-03-11 00:51:45.233979 | orchestrator | Wednesday 11 March 2026 00:50:33 +0000 (0:00:00.353) 0:01:12.021 ******* 2026-03-11 00:51:45.233983 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.233987 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.233997 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.234001 | orchestrator | 2026-03-11 00:51:45.234005 | orchestrator 
| TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] *** 2026-03-11 00:51:45.234009 | orchestrator | Wednesday 11 March 2026 00:50:33 +0000 (0:00:00.278) 0:01:12.299 ******* 2026-03-11 00:51:45.234013 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.234060 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.234065 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.234068 | orchestrator | 2026-03-11 00:51:45.234076 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ******************** 2026-03-11 00:51:45.234080 | orchestrator | Wednesday 11 March 2026 00:50:34 +0000 (0:00:00.278) 0:01:12.577 ******* 2026-03-11 00:51:45.234084 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.234088 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.234092 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.234096 | orchestrator | 2026-03-11 00:51:45.234099 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ******************** 2026-03-11 00:51:45.234103 | orchestrator | Wednesday 11 March 2026 00:50:34 +0000 (0:00:00.519) 0:01:13.097 ******* 2026-03-11 00:51:45.234107 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.234111 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.234115 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.234119 | orchestrator | 2026-03-11 00:51:45.234122 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-11 00:51:45.234126 | orchestrator | Wednesday 11 March 2026 00:50:35 +0000 (0:00:00.328) 0:01:13.426 ******* 2026-03-11 00:51:45.234132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': 
['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234138 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234142 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234152 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234160 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234164 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 
'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234172 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234214 | orchestrator | 2026-03-11 00:51:45.234219 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-11 00:51:45.234223 | orchestrator | Wednesday 11 March 2026 00:50:36 +0000 (0:00:01.350) 0:01:14.776 ******* 2026-03-11 00:51:45.234227 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234246 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234251 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234260 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234264 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234272 | orchestrator | 2026-03-11 00:51:45.234276 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-11 00:51:45.234280 | orchestrator | Wednesday 11 March 2026 00:50:40 +0000 (0:00:03.641) 0:01:18.418 ******* 
2026-03-11 00:51:45.234287 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234303 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234311 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234324 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234328 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234331 | orchestrator | 2026-03-11 00:51:45.234335 | orchestrator | TASK [ovn-db : Flush handlers] 
************************************************* 2026-03-11 00:51:45.234339 | orchestrator | Wednesday 11 March 2026 00:50:42 +0000 (0:00:02.786) 0:01:21.205 ******* 2026-03-11 00:51:45.234343 | orchestrator | 2026-03-11 00:51:45.234347 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-11 00:51:45.234351 | orchestrator | Wednesday 11 March 2026 00:50:42 +0000 (0:00:00.069) 0:01:21.275 ******* 2026-03-11 00:51:45.234354 | orchestrator | 2026-03-11 00:51:45.234358 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-11 00:51:45.234362 | orchestrator | Wednesday 11 March 2026 00:50:42 +0000 (0:00:00.066) 0:01:21.341 ******* 2026-03-11 00:51:45.234366 | orchestrator | 2026-03-11 00:51:45.234370 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-11 00:51:45.234374 | orchestrator | Wednesday 11 March 2026 00:50:43 +0000 (0:00:00.080) 0:01:21.422 ******* 2026-03-11 00:51:45.234381 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:51:45.234385 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:51:45.234389 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:51:45.234412 | orchestrator | 2026-03-11 00:51:45.234417 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-11 00:51:45.234421 | orchestrator | Wednesday 11 March 2026 00:50:50 +0000 (0:00:07.577) 0:01:28.999 ******* 2026-03-11 00:51:45.234425 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:51:45.234429 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:51:45.234433 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:51:45.234437 | orchestrator | 2026-03-11 00:51:45.234441 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-11 00:51:45.234445 | orchestrator | Wednesday 11 March 2026 00:50:58 +0000 (0:00:07.778) 
0:01:36.778 ******* 2026-03-11 00:51:45.234449 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:51:45.234452 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:51:45.234456 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:51:45.234460 | orchestrator | 2026-03-11 00:51:45.234464 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-11 00:51:45.234468 | orchestrator | Wednesday 11 March 2026 00:51:06 +0000 (0:00:07.747) 0:01:44.525 ******* 2026-03-11 00:51:45.234472 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.234476 | orchestrator | 2026-03-11 00:51:45.234479 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-11 00:51:45.234483 | orchestrator | Wednesday 11 March 2026 00:51:06 +0000 (0:00:00.100) 0:01:44.625 ******* 2026-03-11 00:51:45.234487 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:45.234491 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:45.234495 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:45.234502 | orchestrator | 2026-03-11 00:51:45.234506 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-11 00:51:45.234510 | orchestrator | Wednesday 11 March 2026 00:51:06 +0000 (0:00:00.700) 0:01:45.326 ******* 2026-03-11 00:51:45.234513 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.234518 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.234522 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:51:45.234526 | orchestrator | 2026-03-11 00:51:45.234530 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-11 00:51:45.234534 | orchestrator | Wednesday 11 March 2026 00:51:07 +0000 (0:00:00.541) 0:01:45.868 ******* 2026-03-11 00:51:45.234537 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:45.234541 | orchestrator | ok: [testbed-node-1] 2026-03-11 
00:51:45.234545 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:45.234549 | orchestrator | 2026-03-11 00:51:45.234553 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-11 00:51:45.234557 | orchestrator | Wednesday 11 March 2026 00:51:08 +0000 (0:00:00.689) 0:01:46.558 ******* 2026-03-11 00:51:45.234561 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.234565 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.234568 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:51:45.234572 | orchestrator | 2026-03-11 00:51:45.234576 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-11 00:51:45.234580 | orchestrator | Wednesday 11 March 2026 00:51:08 +0000 (0:00:00.681) 0:01:47.240 ******* 2026-03-11 00:51:45.234584 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:45.234588 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:45.234596 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:45.234600 | orchestrator | 2026-03-11 00:51:45.234604 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-11 00:51:45.234608 | orchestrator | Wednesday 11 March 2026 00:51:09 +0000 (0:00:00.855) 0:01:48.095 ******* 2026-03-11 00:51:45.234612 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:45.234616 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:45.234619 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:45.234623 | orchestrator | 2026-03-11 00:51:45.234627 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2026-03-11 00:51:45.234631 | orchestrator | Wednesday 11 March 2026 00:51:10 +0000 (0:00:00.702) 0:01:48.798 ******* 2026-03-11 00:51:45.234635 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:45.234639 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:45.234643 | orchestrator | ok: [testbed-node-2] 
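The `Wait for ovn-nb-db` / `Wait for ovn-sb-db` tasks above amount to polling the OVN database endpoints (by default TCP 6641 for the northbound and 6642 for the southbound DB) until they accept connections, likely via Ansible's `wait_for` module. A minimal sketch of that pattern — `wait_for_port` is a hypothetical helper, demonstrated here against a local stand-in listener rather than a real OVN DB:

```python
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 60.0, interval: float = 1.0) -> bool:
    """Poll until a TCP endpoint accepts connections, or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

# Demo against a listener on an ephemeral local port
# (stands in for the OVN NB/SB DB ports 6641/6642).
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
demo_port = listener.getsockname()[1]
reachable = wait_for_port("127.0.0.1", demo_port, timeout=5.0)
listener.close()
```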
2026-03-11 00:51:45.234647 | orchestrator | 2026-03-11 00:51:45.234651 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2026-03-11 00:51:45.234654 | orchestrator | Wednesday 11 March 2026 00:51:10 +0000 (0:00:00.263) 0:01:49.061 ******* 2026-03-11 00:51:45.234658 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234662 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234666 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234675 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234681 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': 
{'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234685 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234689 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234694 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234701 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234705 | orchestrator | 2026-03-11 00:51:45.234709 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2026-03-11 00:51:45.234713 | orchestrator | Wednesday 11 March 2026 00:51:12 +0000 (0:00:01.322) 0:01:50.384 ******* 2026-03-11 00:51:45.234717 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234743 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234748 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234752 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 
00:51:45.234763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234772 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234778 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234791 | orchestrator | 2026-03-11 00:51:45.234797 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2026-03-11 00:51:45.234803 | orchestrator | Wednesday 11 March 2026 00:51:16 +0000 (0:00:04.036) 0:01:54.421 ******* 2026-03-11 00:51:45.234815 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234820 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234825 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234840 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234847 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234859 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 
'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 00:51:45.234863 | orchestrator | 2026-03-11 00:51:45.234867 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-11 00:51:45.234871 | orchestrator | Wednesday 11 March 2026 00:51:19 +0000 (0:00:03.051) 0:01:57.472 ******* 2026-03-11 00:51:45.234875 | orchestrator | 2026-03-11 00:51:45.234919 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-11 00:51:45.234927 | orchestrator | Wednesday 11 March 2026 00:51:19 +0000 (0:00:00.061) 0:01:57.534 ******* 2026-03-11 00:51:45.234933 | orchestrator | 2026-03-11 00:51:45.234939 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2026-03-11 00:51:45.234946 | orchestrator | Wednesday 11 March 2026 00:51:19 +0000 (0:00:00.078) 0:01:57.612 ******* 2026-03-11 00:51:45.234952 | orchestrator | 2026-03-11 00:51:45.235083 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2026-03-11 00:51:45.235089 | orchestrator | Wednesday 11 March 2026 00:51:19 +0000 (0:00:00.062) 0:01:57.675 ******* 2026-03-11 00:51:45.235093 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:51:45.235098 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:51:45.235102 | orchestrator | 2026-03-11 00:51:45.235112 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2026-03-11 00:51:45.235116 | orchestrator | Wednesday 11 March 2026 00:51:25 +0000 (0:00:06.250) 0:02:03.925 ******* 2026-03-11 00:51:45.235119 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:51:45.235124 | orchestrator | changed: [testbed-node-2] 
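The repeating sequence above (`Copying over config.json files` → `Check ovn containers` → `Flush handlers` → `Restart … container`) is the usual kolla-ansible deploy pattern: config tasks only notify the restart handler when something actually changed, and a notified handler runs at most once per host per flush. A minimal sketch of that change-tracking idea — all names here are illustrative, not the real role's variables:

```python
def deploy_service(name, desired, state, restarted):
    """Apply desired config; notify a restart only on an actual change."""
    changed = state.get(name) != desired
    if changed:
        state[name] = desired
        # Handler notification: the restart happens once at flush time,
        # even if several tasks for the same service report 'changed'.
        restarted.add(name)
    return changed

state, restarted = {}, set()
first = deploy_service("ovn_nb_db", {"image": "2024.2"}, state, restarted)
second = deploy_service("ovn_nb_db", {"image": "2024.2"}, state, restarted)
```

On the second pass nothing differs, so no restart is recorded — which is why the later `Check ovn containers` runs report `ok` instead of `changed` for nodes whose config was already current.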
2026-03-11 00:51:45.235127 | orchestrator | 2026-03-11 00:51:45.235131 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2026-03-11 00:51:45.235135 | orchestrator | Wednesday 11 March 2026 00:51:31 +0000 (0:00:06.387) 0:02:10.313 ******* 2026-03-11 00:51:45.235146 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:51:45.235150 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:51:45.235154 | orchestrator | 2026-03-11 00:51:45.235158 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2026-03-11 00:51:45.235162 | orchestrator | Wednesday 11 March 2026 00:51:38 +0000 (0:00:06.522) 0:02:16.835 ******* 2026-03-11 00:51:45.235165 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:51:45.235170 | orchestrator | 2026-03-11 00:51:45.235173 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2026-03-11 00:51:45.235177 | orchestrator | Wednesday 11 March 2026 00:51:38 +0000 (0:00:00.143) 0:02:16.979 ******* 2026-03-11 00:51:45.235181 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:45.235185 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:45.235188 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:45.235192 | orchestrator | 2026-03-11 00:51:45.235196 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2026-03-11 00:51:45.235200 | orchestrator | Wednesday 11 March 2026 00:51:39 +0000 (0:00:00.795) 0:02:17.774 ******* 2026-03-11 00:51:45.235204 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.235208 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.235237 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:51:45.235243 | orchestrator | 2026-03-11 00:51:45.235246 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2026-03-11 00:51:45.235250 | orchestrator | Wednesday 11 March 2026 
00:51:40 +0000 (0:00:00.767) 0:02:18.542 ******* 2026-03-11 00:51:45.235254 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:45.235258 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:45.235262 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:45.235266 | orchestrator | 2026-03-11 00:51:45.235270 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2026-03-11 00:51:45.235274 | orchestrator | Wednesday 11 March 2026 00:51:40 +0000 (0:00:00.803) 0:02:19.345 ******* 2026-03-11 00:51:45.235277 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:51:45.235281 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:51:45.235285 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:51:45.235289 | orchestrator | 2026-03-11 00:51:45.235292 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2026-03-11 00:51:45.235296 | orchestrator | Wednesday 11 March 2026 00:51:41 +0000 (0:00:00.729) 0:02:20.075 ******* 2026-03-11 00:51:45.235300 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:45.235304 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:45.235312 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:45.235316 | orchestrator | 2026-03-11 00:51:45.235320 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2026-03-11 00:51:45.235324 | orchestrator | Wednesday 11 March 2026 00:51:42 +0000 (0:00:00.921) 0:02:20.996 ******* 2026-03-11 00:51:45.235328 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:51:45.235332 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:51:45.235335 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:51:45.235339 | orchestrator | 2026-03-11 00:51:45.235343 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:51:45.235347 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 
ignored=0 2026-03-11 00:51:45.235352 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-11 00:51:45.235356 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2026-03-11 00:51:45.235360 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:51:45.235365 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:51:45.235375 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 00:51:45.235378 | orchestrator | 2026-03-11 00:51:45.235382 | orchestrator | 2026-03-11 00:51:45.235386 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:51:45.235390 | orchestrator | Wednesday 11 March 2026 00:51:43 +0000 (0:00:01.119) 0:02:22.116 ******* 2026-03-11 00:51:45.235394 | orchestrator | =============================================================================== 2026-03-11 00:51:45.235398 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 24.98s 2026-03-11 00:51:45.235402 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 20.78s 2026-03-11 00:51:45.235406 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 14.27s 2026-03-11 00:51:45.235409 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 14.17s 2026-03-11 00:51:45.235414 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 13.83s 2026-03-11 00:51:45.235418 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.04s 2026-03-11 00:51:45.235422 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.64s 
2026-03-11 00:51:45.235430 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.05s 2026-03-11 00:51:45.235434 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.79s 2026-03-11 00:51:45.235437 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.61s 2026-03-11 00:51:45.235441 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.34s 2026-03-11 00:51:45.235445 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.15s 2026-03-11 00:51:45.235449 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.56s 2026-03-11 00:51:45.235453 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.38s 2026-03-11 00:51:45.235457 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.37s 2026-03-11 00:51:45.235461 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.35s 2026-03-11 00:51:45.235464 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.32s 2026-03-11 00:51:45.235468 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.32s 2026-03-11 00:51:45.235472 | orchestrator | ovn-db : Checking for any existing OVN DB container volumes ------------- 1.31s 2026-03-11 00:51:45.235476 | orchestrator | ovn-db : Wait for ovn-sb-db --------------------------------------------- 1.12s 2026-03-11 00:51:45.235480 | orchestrator | 2026-03-11 00:51:45 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:51:45.235484 | orchestrator | 2026-03-11 00:51:45 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:51:45.235488 | orchestrator | 2026-03-11 00:51:45 | INFO  | Wait 1 second(s) until the next check 
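The repeated `Task … is in state STARTED` / `Wait 1 second(s) until the next check` entries are a client-side polling loop on the orchestrator's task queue: query each task's state, and keep waiting while any is still `STARTED`. A minimal sketch of that loop — the `get_state` callable and task IDs are stand-ins, not the real OSISM API:

```python
import itertools
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, max_checks=600):
    """Poll task states until none are STARTED, mirroring the log's
    'Wait ... until the next check' loop."""
    for _ in range(max_checks):
        states = {tid: get_state(tid) for tid in task_ids}
        if all(s != "STARTED" for s in states.values()):
            return states
        time.sleep(interval)
    raise TimeoutError("tasks still STARTED after max_checks polls")

# Demo with a fake state source that reports STARTED for the first
# three polls of two tasks (six calls), then SUCCESS.
calls = itertools.count()

def fake_state(tid):
    return "STARTED" if next(calls) < 6 else "SUCCESS"

result = wait_for_tasks(fake_state, ["a", "b"], interval=0.0)
```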
2026-03-11 00:51:48.279450 | orchestrator | 2026-03-11 00:51:48 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:51:48.281401 | orchestrator | 2026-03-11 00:51:48 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:51:48.281444 | orchestrator | 2026-03-11 00:51:48 | INFO  | Wait 1 second(s) until the next check
[... identical status checks repeated approximately every 3 seconds from 00:51:51 through 00:54:23; both tasks remained in state STARTED throughout ...]
2026-03-11 00:54:23.574286 | orchestrator | 2026-03-11 00:54:23 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state STARTED 2026-03-11 00:54:23.575557 | orchestrator | 2026-03-11 00:54:23 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:54:23.575591 | orchestrator | 2026-03-11 00:54:23 | INFO  | Wait 1 second(s) 
until the next check 2026-03-11 00:54:26.618259 | orchestrator | 2026-03-11 00:54:26 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:54:26.622483 | orchestrator | 2026-03-11 00:54:26 | INFO  | Task 5d6ae451-f41e-428f-9ca9-3c447e68b441 is in state SUCCESS 2026-03-11 00:54:26.625214 | orchestrator | 2026-03-11 00:54:26.625286 | orchestrator | 2026-03-11 00:54:26.625306 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 00:54:26.625325 | orchestrator | 2026-03-11 00:54:26.625337 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 00:54:26.625381 | orchestrator | Wednesday 11 March 2026 00:48:18 +0000 (0:00:00.271) 0:00:00.271 ******* 2026-03-11 00:54:26.625394 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:54:26.625407 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:54:26.625419 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:54:26.625431 | orchestrator | 2026-03-11 00:54:26.625443 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 00:54:26.625456 | orchestrator | Wednesday 11 March 2026 00:48:18 +0000 (0:00:00.472) 0:00:00.744 ******* 2026-03-11 00:54:26.625469 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2026-03-11 00:54:26.625482 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2026-03-11 00:54:26.625494 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2026-03-11 00:54:26.625506 | orchestrator | 2026-03-11 00:54:26.625518 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2026-03-11 00:54:26.625531 | orchestrator | 2026-03-11 00:54:26.625542 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2026-03-11 00:54:26.627443 | orchestrator | Wednesday 11 March 2026 00:48:19 +0000 
(0:00:00.655) 0:00:01.400 ******* 2026-03-11 00:54:26.627484 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:54:26.627497 | orchestrator | 2026-03-11 00:54:26.628722 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2026-03-11 00:54:26.628745 | orchestrator | Wednesday 11 March 2026 00:48:20 +0000 (0:00:01.122) 0:00:02.522 ******* 2026-03-11 00:54:26.628752 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:54:26.628760 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:54:26.628767 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:54:26.628774 | orchestrator | 2026-03-11 00:54:26.628882 | orchestrator | TASK [Setting sysctl values] *************************************************** 2026-03-11 00:54:26.628896 | orchestrator | Wednesday 11 March 2026 00:48:21 +0000 (0:00:01.033) 0:00:03.555 ******* 2026-03-11 00:54:26.628905 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:54:26.628912 | orchestrator | 2026-03-11 00:54:26.628919 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2026-03-11 00:54:26.628925 | orchestrator | Wednesday 11 March 2026 00:48:22 +0000 (0:00:00.970) 0:00:04.526 ******* 2026-03-11 00:54:26.628932 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:54:26.628939 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:54:26.628945 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:54:26.628952 | orchestrator | 2026-03-11 00:54:26.628958 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2026-03-11 00:54:26.628966 | orchestrator | Wednesday 11 March 2026 00:48:23 +0000 (0:00:00.913) 0:00:05.440 ******* 2026-03-11 00:54:26.628990 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-11 00:54:26.628998 | 
orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-11 00:54:26.629005 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2026-03-11 00:54:26.629011 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-11 00:54:26.629018 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-11 00:54:26.629024 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2026-03-11 00:54:26.629031 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-11 00:54:26.629038 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-11 00:54:26.629045 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2026-03-11 00:54:26.629052 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-11 00:54:26.629059 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-11 00:54:26.629065 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2026-03-11 00:54:26.629072 | orchestrator | 2026-03-11 00:54:26.629078 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-11 00:54:26.629085 | orchestrator | Wednesday 11 March 2026 00:48:26 +0000 (0:00:03.193) 0:00:08.633 ******* 2026-03-11 00:54:26.629097 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-11 00:54:26.629104 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-11 00:54:26.629110 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-11 00:54:26.629117 | orchestrator | 
2026-03-11 00:54:26.629123 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-11 00:54:26.629130 | orchestrator | Wednesday 11 March 2026 00:48:27 +0000 (0:00:00.834) 0:00:09.468 ******* 2026-03-11 00:54:26.629136 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2026-03-11 00:54:26.629143 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2026-03-11 00:54:26.629150 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2026-03-11 00:54:26.629156 | orchestrator | 2026-03-11 00:54:26.629162 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-11 00:54:26.629169 | orchestrator | Wednesday 11 March 2026 00:48:28 +0000 (0:00:01.511) 0:00:10.979 ******* 2026-03-11 00:54:26.629178 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2026-03-11 00:54:26.629189 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.629385 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2026-03-11 00:54:26.629400 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.629408 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2026-03-11 00:54:26.629415 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.629422 | orchestrator | 2026-03-11 00:54:26.629429 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2026-03-11 00:54:26.629436 | orchestrator | Wednesday 11 March 2026 00:48:30 +0000 (0:00:01.279) 0:00:12.258 ******* 2026-03-11 00:54:26.629446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2026-03-11 00:54:26.629466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2026-03-11 00:54:26.629474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-11 00:54:26.629481 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-11 00:54:26.629493 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2026-03-11 00:54:26.629501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-11 00:54:26.629517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-11 
00:54:26.629525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2026-03-11 00:54:26.629541 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2026-03-11 00:54:26.629548 | orchestrator | 2026-03-11 00:54:26.629555 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2026-03-11 00:54:26.629562 | orchestrator | Wednesday 11 March 2026 00:48:32 +0000 (0:00:02.572) 0:00:14.830 ******* 2026-03-11 00:54:26.629569 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.629576 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.629584 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.629591 | orchestrator | 2026-03-11 00:54:26.629598 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2026-03-11 00:54:26.629605 | orchestrator | Wednesday 11 March 2026 00:48:33 +0000 (0:00:01.186) 0:00:16.017 ******* 2026-03-11 
00:54:26.629612 | orchestrator | changed: [testbed-node-0] => (item=users)
2026-03-11 00:54:26.629619 | orchestrator | changed: [testbed-node-1] => (item=users)
2026-03-11 00:54:26.629626 | orchestrator | changed: [testbed-node-2] => (item=users)
2026-03-11 00:54:26.629634 | orchestrator | changed: [testbed-node-0] => (item=rules)
2026-03-11 00:54:26.629640 | orchestrator | changed: [testbed-node-2] => (item=rules)
2026-03-11 00:54:26.629647 | orchestrator | changed: [testbed-node-1] => (item=rules)
2026-03-11 00:54:26.629655 | orchestrator | 
2026-03-11 00:54:26.629662 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2026-03-11 00:54:26.629669 | orchestrator | Wednesday 11 March 2026 00:48:36 +0000 (0:00:02.315) 0:00:18.332 *******
2026-03-11 00:54:26.629677 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:54:26.629684 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:54:26.629691 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:54:26.629698 | orchestrator | 
2026-03-11 00:54:26.629706 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2026-03-11 00:54:26.629713 | orchestrator | Wednesday 11 March 2026 00:48:38 +0000 (0:00:01.722) 0:00:20.054 *******
2026-03-11 00:54:26.629720 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:54:26.629728 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:54:26.629735 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:54:26.629742 | orchestrator | 
2026-03-11 00:54:26.629748 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2026-03-11 00:54:26.629754 | orchestrator | Wednesday 11 March 2026 00:48:39 +0000 (0:00:01.409) 0:00:21.463 *******
2026-03-11 00:54:26.629763 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  
2026-03-11 00:54:26.629784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-03-11 00:54:26.629815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  
2026-03-11 00:54:26.629823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3abb7d1a68cb67b2f22798d42b49d6786c686621', '__omit_place_holder__3abb7d1a68cb67b2f22798d42b49d6786c686621'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  
2026-03-11 00:54:26.629831 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.629837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-03-11 00:54:26.629844 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-03-11 00:54:26.629850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  
2026-03-11 00:54:26.629882 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3abb7d1a68cb67b2f22798d42b49d6786c686621', '__omit_place_holder__3abb7d1a68cb67b2f22798d42b49d6786c686621'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  
2026-03-11 00:54:26.629894 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.629908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-03-11 00:54:26.629914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-03-11 00:54:26.629921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  
2026-03-11 00:54:26.629928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3abb7d1a68cb67b2f22798d42b49d6786c686621', '__omit_place_holder__3abb7d1a68cb67b2f22798d42b49d6786c686621'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  
2026-03-11 00:54:26.629934 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.629940 | orchestrator | 
2026-03-11 00:54:26.629958 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************
2026-03-11 00:54:26.629965 | orchestrator | Wednesday 11 March 2026 00:48:40 +0000 (0:00:01.169) 0:00:22.633 *******
2026-03-11 00:54:26.629971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.629981 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.630001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.630008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.630061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  
2026-03-11 00:54:26.630071 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3abb7d1a68cb67b2f22798d42b49d6786c686621', '__omit_place_holder__3abb7d1a68cb67b2f22798d42b49d6786c686621'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  
2026-03-11 00:54:26.630077 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.630084 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  
2026-03-11 00:54:26.630161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3abb7d1a68cb67b2f22798d42b49d6786c686621', '__omit_place_holder__3abb7d1a68cb67b2f22798d42b49d6786c686621'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  
2026-03-11 00:54:26.630173 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.630180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  
2026-03-11 00:54:26.630187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__3abb7d1a68cb67b2f22798d42b49d6786c686621', '__omit_place_holder__3abb7d1a68cb67b2f22798d42b49d6786c686621'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  
2026-03-11 00:54:26.630193 | orchestrator | 
2026-03-11 00:54:26.630199 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] **************
2026-03-11 00:54:26.630205 | orchestrator | Wednesday 11 March 2026 00:48:44 +0000 (0:00:04.188) 0:00:26.822 *******
2026-03-11 00:54:26.630212 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.630218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.630235 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.630247 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.630254 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.630260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.630266 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-11 00:54:26.630273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-11 00:54:26.630279 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-11 00:54:26.630290 | orchestrator | 
2026-03-11 00:54:26.630296 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] *********************************
2026-03-11 00:54:26.630302 | orchestrator | Wednesday 11 March 2026 00:48:48 +0000 (0:00:03.709) 0:00:30.532 *******
2026-03-11 00:54:26.630309 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-11 00:54:26.630318 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-11 00:54:26.630324 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2)
2026-03-11 00:54:26.630330 | orchestrator | 
2026-03-11 00:54:26.630336 | orchestrator | TASK [loadbalancer : Copying over proxysql config] *****************************
2026-03-11 00:54:26.630343 | orchestrator | Wednesday 11 March 2026 00:48:51 +0000 (0:00:02.657) 0:00:33.189 *******
2026-03-11 00:54:26.630349 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-11 00:54:26.630355 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-11 00:54:26.630361 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2026-03-11 00:54:26.630367 | orchestrator | 
2026-03-11 00:54:26.630378 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2026-03-11 00:54:26.630384 | orchestrator | Wednesday 11 March 2026 00:48:56 +0000 (0:00:05.484) 0:00:38.674 *******
2026-03-11 00:54:26.630390 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.630396 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.630402 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.630408 | orchestrator | 
2026-03-11 00:54:26.630414 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2026-03-11 00:54:26.630420 | orchestrator | Wednesday 11 March 2026 00:48:57 +0000 (0:00:00.704) 0:00:39.379 *******
2026-03-11 00:54:26.630427 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-11 00:54:26.630433 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-11 00:54:26.630439 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2026-03-11 00:54:26.630445 | orchestrator | 
2026-03-11 00:54:26.630451 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2026-03-11 00:54:26.630457 | orchestrator | Wednesday 11 March 2026 00:49:00 +0000 (0:00:03.178) 0:00:42.557 *******
2026-03-11 00:54:26.630463 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-11 00:54:26.630470 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-11 00:54:26.630476 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2026-03-11 00:54:26.630482 | orchestrator | 
2026-03-11 00:54:26.630488 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2026-03-11 00:54:26.630494 | orchestrator | Wednesday 11 March 2026 00:49:02 +0000 (0:00:02.128) 0:00:44.686 *******
2026-03-11 00:54:26.630500 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2026-03-11 00:54:26.630507 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2026-03-11 00:54:26.630518 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2026-03-11 00:54:26.630524 | orchestrator | 
2026-03-11 00:54:26.630530 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2026-03-11 00:54:26.630536 | orchestrator | Wednesday 11 March 2026 00:49:04 +0000 (0:00:01.473) 0:00:46.159 *******
2026-03-11 00:54:26.630542 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2026-03-11 00:54:26.630548 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2026-03-11 00:54:26.630554 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2026-03-11 00:54:26.630560 | orchestrator | 
2026-03-11 00:54:26.630566 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2026-03-11 00:54:26.630573 | orchestrator | Wednesday 11 March 2026 00:49:05 +0000 (0:00:01.589) 0:00:47.749 *******
2026-03-11 00:54:26.630579 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:54:26.630585 | orchestrator | 
2026-03-11 00:54:26.630591 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2026-03-11 00:54:26.630597 | orchestrator | Wednesday 11 March 2026 00:49:06 +0000 (0:00:00.677) 0:00:48.427 *******
2026-03-11 00:54:26.630603 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.630610 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.630620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.630646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.630653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.630664 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.630671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-11 00:54:26.630677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-11 00:54:26.630687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-11 00:54:26.630693 | orchestrator | 
2026-03-11 00:54:26.630699 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2026-03-11 00:54:26.630706 | orchestrator | Wednesday 11 March 2026 00:49:09 +0000 (0:00:02.947) 0:00:51.374 *******
2026-03-11 00:54:26.630718 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  
2026-03-11 00:54:26.630725 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-03-11 00:54:26.630735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  
2026-03-11 00:54:26.630742 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.630748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  
2026-03-11 00:54:26.630755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-03-11 00:54:26.630764 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  
2026-03-11 00:54:26.630771 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.630777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  
2026-03-11 00:54:26.630788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  
2026-03-11 00:54:26.630823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  
2026-03-11 00:54:26.630830 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.630837 | orchestrator | 
2026-03-11 00:54:26.630843 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2026-03-11 00:54:26.630849 | orchestrator | Wednesday 11 March 2026 00:49:10 +0000 (0:00:00.864) 0:00:52.239 *******
2026-03-11 00:54:26.630856 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-11 00:54:26.630862 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:54:26.630869 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:54:26.630875 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.630885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-11 00:54:26.630895 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:54:26.630907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:54:26.630913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-11 00:54:26.630920 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.630926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:54:26.630932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:54:26.630939 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.630945 | orchestrator | 2026-03-11 00:54:26.630951 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-11 00:54:26.630957 | orchestrator | Wednesday 11 March 2026 00:49:11 +0000 (0:00:01.465) 0:00:53.704 ******* 2026-03-11 
00:54:26.630967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-11 00:54:26.630979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:54:26.630989 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:54:26.630996 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.631002 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-11 00:54:26.631009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:54:26.631032 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:54:26.631039 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.631045 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 
'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-11 00:54:26.631057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:54:26.631073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:54:26.631117 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.631128 | orchestrator | 2026-03-11 00:54:26.631154 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend 
internal TLS certificate] *** 2026-03-11 00:54:26.631165 | orchestrator | Wednesday 11 March 2026 00:49:12 +0000 (0:00:00.683) 0:00:54.388 ******* 2026-03-11 00:54:26.631175 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-11 00:54:26.631182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:54:26.631188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:54:26.631195 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.631201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-11 00:54:26.631207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:54:26.631218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 
'dimensions': {}}})  2026-03-11 00:54:26.631229 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.631240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-11 00:54:26.631246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:54:26.631253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:54:26.631259 | 
orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.631265 | orchestrator | 2026-03-11 00:54:26.631271 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-11 00:54:26.631277 | orchestrator | Wednesday 11 March 2026 00:49:12 +0000 (0:00:00.546) 0:00:54.935 ******* 2026-03-11 00:54:26.631284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2026-03-11 00:54:26.631290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:54:26.631299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:54:26.631310 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.631321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2026-03-11 00:54:26.631327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:54:26.631334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:54:26.631340 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.631347 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-11 00:54:26.631353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:54:26.631359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2026-03-11 00:54:26.631373 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.631383 | orchestrator | 2026-03-11 00:54:26.631394 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] ******* 2026-03-11 00:54:26.631405 | orchestrator | Wednesday 11 March 2026 00:49:13 +0000 (0:00:00.727) 0:00:55.662 ******* 2026-03-11 00:54:26.631419 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2026-03-11 00:54:26.631435 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2026-03-11 00:54:26.631442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-11 00:54:26.631449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.631455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.631462 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.631468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-11 00:54:26.631479 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.631488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.631498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.631505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-11 00:54:26.631512 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.631518 | orchestrator |
2026-03-11 00:54:26.631524 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] ***
2026-03-11 00:54:26.631530 | orchestrator | Wednesday 11 March 2026 00:49:14 +0000 (0:00:00.803) 0:00:56.465 *******
2026-03-11 00:54:26.631537 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.631543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.631550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-11 00:54:26.631616 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.631623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.631634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.631650 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-11 00:54:26.631660 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.631670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.631681 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.631691 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-11 00:54:26.631708 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.631719 | orchestrator |
2026-03-11 00:54:26.631729 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] ****
2026-03-11 00:54:26.631741 | orchestrator | Wednesday 11 March 2026 00:49:14 +0000 (0:00:00.530) 0:00:56.996 *******
2026-03-11 00:54:26.631751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.631767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.631774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-11 00:54:26.631780 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.631814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.631822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.631829 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-11 00:54:26.631841 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.631847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.631854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.631864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-11 00:54:26.631874 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.631884 | orchestrator |
2026-03-11 00:54:26.631894 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************
2026-03-11 00:54:26.631905 | orchestrator | Wednesday 11 March 2026 00:49:15 +0000 (0:00:00.729) 0:00:57.726 *******
2026-03-11 00:54:26.631916 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-11 00:54:26.631927 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-11 00:54:26.631954 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2)
2026-03-11 00:54:26.631961 | orchestrator |
2026-03-11 00:54:26.631968 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] ***********************
2026-03-11 00:54:26.631974 | orchestrator | Wednesday 11 March 2026 00:49:17 +0000 (0:00:01.595) 0:00:59.321 *******
2026-03-11 00:54:26.631980 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-11 00:54:26.631987 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-11 00:54:26.631993 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2)
2026-03-11 00:54:26.631999 | orchestrator |
2026-03-11 00:54:26.632006 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] ****************************
2026-03-11 00:54:26.632012 | orchestrator | Wednesday 11 March 2026 00:49:18 +0000 (0:00:01.443) 0:01:00.765 *******
2026-03-11 00:54:26.632018 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-11 00:54:26.632024 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-11 00:54:26.632030 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})
2026-03-11 00:54:26.632045 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-11 00:54:26.632051 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.632058 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-11 00:54:26.632064 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.632070 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-11 00:54:26.632076 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.632082 | orchestrator |
2026-03-11 00:54:26.632088 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] ****************************
2026-03-11 00:54:26.632095 | orchestrator | Wednesday 11 March 2026 00:49:19 +0000 (0:00:00.734) 0:01:01.499 *******
2026-03-11 00:54:26.632101 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True,
'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.632108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.632118 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2026-03-11 00:54:26.632130 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.632137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.632148 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2026-03-11 00:54:26.632155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-11 00:54:26.632161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-11 00:54:26.632168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2026-03-11 00:54:26.632175 | orchestrator |
2026-03-11 00:54:26.632186 | orchestrator | TASK [include_role : aodh] *****************************************************
2026-03-11 00:54:26.632196 | orchestrator | Wednesday 11 March 2026 00:49:22 +0000 (0:00:02.748) 0:01:04.248 *******
2026-03-11 00:54:26.632235 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:54:26.632256 | orchestrator |
2026-03-11 00:54:26.632263 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] ***********************
2026-03-11 00:54:26.632269 | orchestrator | Wednesday 11 March 2026 00:49:22 +0000 (0:00:00.568) 0:01:04.816 *******
2026-03-11 00:54:26.632281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-11 00:54:26.632294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-11 00:54:26.632312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.632319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.632326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-11 00:54:26.632351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-11 00:54:26.632358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-11 00:54:26.632379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-11 00:54:26.632390 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.632396 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.632403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.632409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.632415 | orchestrator |
2026-03-11 00:54:26.632422 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] ***
2026-03-11 00:54:26.632429 | orchestrator | Wednesday 11 March 2026 00:49:27 +0000 (0:00:04.225) 0:01:09.041 *******
2026-03-11 00:54:26.632438 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-11 00:54:26.632488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-11 00:54:26.632499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.632506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.632512 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.632519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})
2026-03-11 00:54:26.632525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})
2026-03-11 00:54:26.632536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.632548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port
aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.632564 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.632581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2026-03-11 00:54:26.632593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2026-03-11 00:54:26.632599 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.632606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.632612 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.632618 | orchestrator | 2026-03-11 00:54:26.632624 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2026-03-11 00:54:26.632630 | orchestrator | Wednesday 11 March 2026 00:49:28 +0000 (0:00:01.241) 0:01:10.283 ******* 2026-03-11 00:54:26.632637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-11 00:54:26.632645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-11 00:54:26.632652 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.632663 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-11 00:54:26.632674 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-11 00:54:26.632680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2026-03-11 00:54:26.632687 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.632693 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2026-03-11 00:54:26.632699 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.632706 | orchestrator | 2026-03-11 00:54:26.632721 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2026-03-11 00:54:26.632733 | orchestrator | Wednesday 11 March 2026 00:49:29 +0000 (0:00:01.124) 0:01:11.407 ******* 2026-03-11 00:54:26.632743 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.632753 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.632763 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.632774 | orchestrator | 2026-03-11 00:54:26.632785 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2026-03-11 00:54:26.632827 | orchestrator | Wednesday 11 March 2026 00:49:30 +0000 (0:00:01.542) 0:01:12.950 ******* 2026-03-11 00:54:26.632837 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.632844 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.632850 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.632856 | orchestrator | 2026-03-11 00:54:26.632862 | orchestrator | TASK [include_role : barbican] ************************************************* 2026-03-11 00:54:26.632868 | 
orchestrator | Wednesday 11 March 2026 00:49:32 +0000 (0:00:01.825) 0:01:14.775 ******* 2026-03-11 00:54:26.632875 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:54:26.632881 | orchestrator | 2026-03-11 00:54:26.632887 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2026-03-11 00:54:26.632893 | orchestrator | Wednesday 11 March 2026 00:49:33 +0000 (0:00:00.720) 0:01:15.496 ******* 2026-03-11 00:54:26.632901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-11 00:54:26.632908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.632915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.632946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-11 00:54:26.632959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.632966 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.632973 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}}}}) 2026-03-11 00:54:26.632983 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.633008 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.633021 | orchestrator | 2026-03-11 00:54:26.633036 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2026-03-11 00:54:26.633047 | orchestrator | Wednesday 11 March 2026 00:49:38 +0000 (0:00:05.153) 0:01:20.649 ******* 2026-03-11 00:54:26.633065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-11 00:54:26.633075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.633086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.633096 | orchestrator | skipping: [testbed-node-0] 2026-03-11 
00:54:26.633106 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-11 00:54:26.633125 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.633142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.633153 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.633167 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-11 00:54:26.633174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': 
'30'}}})  2026-03-11 00:54:26.633181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.633187 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.633198 | orchestrator | 2026-03-11 00:54:26.633205 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2026-03-11 00:54:26.633212 | orchestrator | Wednesday 11 March 2026 00:49:39 +0000 (0:00:00.522) 0:01:21.171 ******* 2026-03-11 00:54:26.633219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-11 00:54:26.633226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-11 00:54:26.633233 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.633239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-11 00:54:26.633246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-11 00:54:26.633252 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.633258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-11 00:54:26.633268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2026-03-11 00:54:26.633275 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.633281 | orchestrator | 2026-03-11 00:54:26.633290 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2026-03-11 00:54:26.633300 | orchestrator | Wednesday 11 March 2026 00:49:40 +0000 (0:00:01.048) 0:01:22.220 ******* 2026-03-11 00:54:26.633309 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.633325 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.633336 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.633346 | orchestrator | 2026-03-11 00:54:26.633355 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2026-03-11 00:54:26.633365 | orchestrator | Wednesday 11 March 2026 00:49:41 +0000 (0:00:01.344) 0:01:23.565 ******* 2026-03-11 00:54:26.633374 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.633384 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.633393 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.633403 | orchestrator | 2026-03-11 00:54:26.633420 | orchestrator | TASK [include_role : blazar] *************************************************** 2026-03-11 00:54:26.633431 | orchestrator | Wednesday 11 March 2026 00:49:43 +0000 (0:00:01.811) 
0:01:25.376 ******* 2026-03-11 00:54:26.633439 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.633449 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.633459 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.633468 | orchestrator | 2026-03-11 00:54:26.633478 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2026-03-11 00:54:26.633487 | orchestrator | Wednesday 11 March 2026 00:49:43 +0000 (0:00:00.275) 0:01:25.652 ******* 2026-03-11 00:54:26.633497 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:54:26.633507 | orchestrator | 2026-03-11 00:54:26.633517 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2026-03-11 00:54:26.633526 | orchestrator | Wednesday 11 March 2026 00:49:44 +0000 (0:00:00.743) 0:01:26.395 ******* 2026-03-11 00:54:26.633538 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-11 00:54:26.633561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-11 00:54:26.633571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}}) 2026-03-11 00:54:26.633582 | orchestrator | 2026-03-11 00:54:26.633591 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2026-03-11 00:54:26.633602 | orchestrator | Wednesday 11 March 2026 00:49:47 +0000 (0:00:02.658) 0:01:29.054 ******* 2026-03-11 00:54:26.633627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-11 00:54:26.633639 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.633649 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-11 00:54:26.633664 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.633670 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 
192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}}}})  2026-03-11 00:54:26.633677 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.633683 | orchestrator | 2026-03-11 00:54:26.633689 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2026-03-11 00:54:26.633696 | orchestrator | Wednesday 11 March 2026 00:49:48 +0000 (0:00:01.679) 0:01:30.734 ******* 2026-03-11 00:54:26.633703 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-11 00:54:26.633712 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-11 00:54:26.633721 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.633727 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-11 00:54:26.633737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-11 00:54:26.633743 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.633755 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-11 00:54:26.633762 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:7480 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:7480 check inter 2000 rise 2 fall 5']}})  2026-03-11 00:54:26.633774 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.633781 | orchestrator | 2026-03-11 00:54:26.633787 | orchestrator | TASK [proxysql-config : 
Copying over ceph-rgw ProxySQL users config] *********** 2026-03-11 00:54:26.633851 | orchestrator | Wednesday 11 March 2026 00:49:51 +0000 (0:00:02.834) 0:01:33.569 ******* 2026-03-11 00:54:26.633859 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.633866 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.633872 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.633878 | orchestrator | 2026-03-11 00:54:26.633884 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2026-03-11 00:54:26.633890 | orchestrator | Wednesday 11 March 2026 00:49:52 +0000 (0:00:00.643) 0:01:34.212 ******* 2026-03-11 00:54:26.633896 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.633902 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.633909 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.633915 | orchestrator | 2026-03-11 00:54:26.633921 | orchestrator | TASK [include_role : cinder] *************************************************** 2026-03-11 00:54:26.633927 | orchestrator | Wednesday 11 March 2026 00:49:53 +0000 (0:00:01.092) 0:01:35.304 ******* 2026-03-11 00:54:26.633933 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:54:26.633939 | orchestrator | 2026-03-11 00:54:26.633945 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2026-03-11 00:54:26.633951 | orchestrator | Wednesday 11 March 2026 00:49:53 +0000 (0:00:00.728) 0:01:36.032 ******* 2026-03-11 00:54:26.633958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 00:54:26.633966 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.633978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.633999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.634007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 00:54:26.634041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 00:54:26.634049 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.634059 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.634070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 
'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.634082 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.634089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.634096 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.634102 | orchestrator | 2026-03-11 00:54:26.634109 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2026-03-11 00:54:26.634115 | orchestrator | Wednesday 11 March 2026 00:49:59 +0000 (0:00:05.095) 0:01:41.128 ******* 2026-03-11 00:54:26.634121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 00:54:26.634137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': 
{'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.634167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.634180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 
00:54:26.634191 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.634201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 00:54:26.634212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 00:54:26.634227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.634244 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.634592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.634622 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.634633 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.634643 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.634654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.634661 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.634667 | orchestrator | 2026-03-11 00:54:26.634673 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************ 2026-03-11 00:54:26.634689 | orchestrator | Wednesday 11 March 2026 00:50:00 +0000 (0:00:01.300) 0:01:42.429 ******* 2026-03-11 00:54:26.634695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-11 00:54:26.634702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-11 00:54:26.634714 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.634720 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-11 00:54:26.634726 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-11 00:54:26.634731 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.634737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-11 00:54:26.634749 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})  2026-03-11 00:54:26.634755 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.634761 | orchestrator | 2026-03-11 00:54:26.634767 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] ************* 2026-03-11 00:54:26.634772 | orchestrator | Wednesday 11 March 2026 00:50:01 +0000 (0:00:01.140) 0:01:43.569 ******* 2026-03-11 00:54:26.634778 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.634783 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.634788 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.634810 | orchestrator | 2026-03-11 00:54:26.634816 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] ************* 2026-03-11 00:54:26.634821 | orchestrator | Wednesday 11 March 2026 00:50:03 +0000 (0:00:01.538) 0:01:45.107 ******* 2026-03-11 00:54:26.634827 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.634832 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.634837 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.634843 | orchestrator | 2026-03-11 00:54:26.634848 | orchestrator | TASK [include_role : cloudkitty] *********************************************** 2026-03-11 00:54:26.634853 | orchestrator | Wednesday 11 March 2026 00:50:05 +0000 (0:00:01.984) 0:01:47.091 ******* 2026-03-11 00:54:26.634859 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.634864 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.634869 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.634874 | orchestrator | 2026-03-11 00:54:26.634880 | orchestrator | TASK [include_role : cyborg] *************************************************** 2026-03-11 00:54:26.634885 | orchestrator | 
Wednesday 11 March 2026 00:50:05 +0000 (0:00:00.483) 0:01:47.575 ******* 2026-03-11 00:54:26.634891 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.634896 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.634902 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.634907 | orchestrator | 2026-03-11 00:54:26.634912 | orchestrator | TASK [include_role : designate] ************************************************ 2026-03-11 00:54:26.634918 | orchestrator | Wednesday 11 March 2026 00:50:05 +0000 (0:00:00.276) 0:01:47.852 ******* 2026-03-11 00:54:26.634923 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:54:26.634929 | orchestrator | 2026-03-11 00:54:26.634934 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ****************** 2026-03-11 00:54:26.634944 | orchestrator | Wednesday 11 March 2026 00:50:06 +0000 (0:00:00.698) 0:01:48.551 ******* 2026-03-11 00:54:26.634950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 00:54:26.634957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-11 00:54:26.634968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.634979 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.634985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.634991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635001 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 00:54:26.635016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-11 00:54:26.635025 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635031 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635037 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 00:54:26.635053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-11 00:54:26.635061 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 
5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635097 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635107 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635125 | orchestrator | 2026-03-11 00:54:26.635133 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2026-03-11 00:54:26.635141 | orchestrator | Wednesday 11 March 2026 00:50:10 +0000 (0:00:04.067) 0:01:52.618 ******* 2026-03-11 00:54:26.635154 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-11 00:54:26.635170 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-11 00:54:26.635180 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635243 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.635294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-11 00:54:26.635306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-11 00:54:26.635454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635518 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.635535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-11 00:54:26.635553 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-11 00:54:26.635563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635573 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': 
{'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635596 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635611 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.635621 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.635634 | orchestrator | 2026-03-11 00:54:26.635643 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] ********************* 
2026-03-11 00:54:26.635651 | orchestrator | Wednesday 11 March 2026 00:50:11 +0000 (0:00:00.868) 0:01:53.487 ******* 2026-03-11 00:54:26.635660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-11 00:54:26.635699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-11 00:54:26.635710 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.635719 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-11 00:54:26.635728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})  2026-03-11 00:54:26.635737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-11 00:54:26.635747 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.635756 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})  2026-03-11 00:54:26.635765 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.635773 | orchestrator | 2026-03-11 00:54:26.635781 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] ********** 2026-03-11 00:54:26.635789 | orchestrator | Wednesday 11 March 2026 00:50:12 +0000 
(0:00:00.732) 0:01:54.219 ******* 2026-03-11 00:54:26.635819 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.635827 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.635836 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.635844 | orchestrator | 2026-03-11 00:54:26.635853 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] ********** 2026-03-11 00:54:26.635861 | orchestrator | Wednesday 11 March 2026 00:50:13 +0000 (0:00:01.345) 0:01:55.565 ******* 2026-03-11 00:54:26.635868 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.635968 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.635979 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.635988 | orchestrator | 2026-03-11 00:54:26.635996 | orchestrator | TASK [include_role : etcd] ***************************************************** 2026-03-11 00:54:26.636005 | orchestrator | Wednesday 11 March 2026 00:50:15 +0000 (0:00:01.720) 0:01:57.285 ******* 2026-03-11 00:54:26.636014 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.636023 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.636031 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.636038 | orchestrator | 2026-03-11 00:54:26.636046 | orchestrator | TASK [include_role : glance] *************************************************** 2026-03-11 00:54:26.636054 | orchestrator | Wednesday 11 March 2026 00:50:15 +0000 (0:00:00.424) 0:01:57.709 ******* 2026-03-11 00:54:26.636064 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:54:26.636072 | orchestrator | 2026-03-11 00:54:26.636081 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] ********************* 2026-03-11 00:54:26.636089 | orchestrator | Wednesday 11 March 2026 00:50:16 +0000 (0:00:00.721) 0:01:58.431 ******* 2026-03-11 00:54:26.636118 | orchestrator | changed: [testbed-node-0] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 00:54:26.636139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-11 00:54:26.636153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 00:54:26.636175 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-11 00:54:26.636191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 00:54:26.636214 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-11 00:54:26.636224 | orchestrator | 2026-03-11 00:54:26.636232 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2026-03-11 00:54:26.636241 | orchestrator | Wednesday 11 March 2026 00:50:20 +0000 (0:00:03.835) 0:02:02.266 ******* 2026-03-11 00:54:26.636250 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 
'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-11 00:54:26.636295 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify 
required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-11 00:54:26.636308 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.636319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-11 00:54:26.636388 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-11 00:54:26.636414 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.636424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-11 00:54:26.636469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2026-03-11 00:54:26.636488 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.636496 | orchestrator | 2026-03-11 00:54:26.636504 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2026-03-11 00:54:26.636513 | orchestrator | Wednesday 11 March 2026 00:50:23 +0000 (0:00:02.948) 0:02:05.214 ******* 2026-03-11 00:54:26.636522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-11 00:54:26.636532 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-11 00:54:26.636542 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.636550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-11 00:54:26.636560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-11 00:54:26.636568 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.636577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-11 00:54:26.636597 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2026-03-11 00:54:26.636606 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.636614 | orchestrator | 2026-03-11 00:54:26.636622 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2026-03-11 00:54:26.636630 | orchestrator | Wednesday 11 March 2026 00:50:26 +0000 (0:00:03.318) 0:02:08.533 ******* 2026-03-11 00:54:26.636737 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.636749 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.636758 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.636766 | orchestrator | 2026-03-11 00:54:26.636775 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2026-03-11 00:54:26.636783 | orchestrator | Wednesday 11 March 2026 00:50:27 +0000 (0:00:01.264) 0:02:09.798 ******* 2026-03-11 00:54:26.636810 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.636820 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.636829 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.636837 | orchestrator | 2026-03-11 00:54:26.636846 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2026-03-11 00:54:26.636863 | orchestrator | Wednesday 11 March 2026 00:50:29 +0000 (0:00:01.705) 0:02:11.503 ******* 2026-03-11 00:54:26.636871 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.636880 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.636888 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.636896 | orchestrator | 2026-03-11 00:54:26.636903 | orchestrator | TASK [include_role : grafana] ************************************************** 2026-03-11 00:54:26.636911 | orchestrator | Wednesday 11 March 2026 00:50:29 +0000 (0:00:00.419) 0:02:11.923 ******* 2026-03-11 00:54:26.636920 | 
orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:54:26.636928 | orchestrator | 2026-03-11 00:54:26.636936 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2026-03-11 00:54:26.636944 | orchestrator | Wednesday 11 March 2026 00:50:30 +0000 (0:00:00.817) 0:02:12.741 ******* 2026-03-11 00:54:26.636953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 00:54:26.636964 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 00:54:26.636983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 
'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 00:54:26.636993 | orchestrator | 2026-03-11 00:54:26.637003 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2026-03-11 00:54:26.637012 | orchestrator | Wednesday 11 March 2026 00:50:34 +0000 (0:00:03.572) 0:02:16.313 ******* 2026-03-11 00:54:26.637027 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-11 00:54:26.637037 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.638150 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-11 00:54:26.638196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-11 00:54:26.638202 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.638207 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.638211 | orchestrator | 2026-03-11 00:54:26.638215 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2026-03-11 00:54:26.638219 | orchestrator | Wednesday 11 March 2026 00:50:34 +0000 (0:00:00.465) 0:02:16.778 ******* 2026-03-11 00:54:26.638224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-11 00:54:26.638244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-11 00:54:26.638249 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.638253 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-11 00:54:26.638257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-11 00:54:26.638261 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.638265 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2026-03-11 00:54:26.638269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2026-03-11 00:54:26.638272 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.638276 | orchestrator | 2026-03-11 00:54:26.638280 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2026-03-11 00:54:26.638284 | orchestrator | Wednesday 11 March 2026 00:50:35 +0000 (0:00:00.624) 0:02:17.403 ******* 2026-03-11 00:54:26.638288 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.638291 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.638295 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.638299 | orchestrator | 2026-03-11 00:54:26.638302 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2026-03-11 00:54:26.638306 | orchestrator | Wednesday 11 March 2026 00:50:36 +0000 (0:00:01.155) 0:02:18.558 ******* 2026-03-11 00:54:26.638310 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.638314 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.638317 | orchestrator | 
changed: [testbed-node-2] 2026-03-11 00:54:26.638321 | orchestrator | 2026-03-11 00:54:26.638325 | orchestrator | TASK [include_role : heat] ***************************************************** 2026-03-11 00:54:26.638329 | orchestrator | Wednesday 11 March 2026 00:50:38 +0000 (0:00:02.106) 0:02:20.664 ******* 2026-03-11 00:54:26.638332 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.638336 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.638340 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.638343 | orchestrator | 2026-03-11 00:54:26.638347 | orchestrator | TASK [include_role : horizon] ************************************************** 2026-03-11 00:54:26.638351 | orchestrator | Wednesday 11 March 2026 00:50:39 +0000 (0:00:00.420) 0:02:21.085 ******* 2026-03-11 00:54:26.638355 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:54:26.638358 | orchestrator | 2026-03-11 00:54:26.638371 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2026-03-11 00:54:26.638375 | orchestrator | Wednesday 11 March 2026 00:50:39 +0000 (0:00:00.799) 0:02:21.884 ******* 2026-03-11 00:54:26.638390 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-11 00:54:26.638403 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 
'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-11 00:54:26.638413 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 
'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-11 00:54:26.638421 | orchestrator | 2026-03-11 00:54:26.638425 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external 
frontend] *** 2026-03-11 00:54:26.638429 | orchestrator | Wednesday 11 March 2026 00:50:43 +0000 (0:00:03.306) 0:02:25.191 ******* 2026-03-11 00:54:26.638440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': 
{'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-11 00:54:26.638447 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.638452 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-11 00:54:26.638456 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.638467 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 
'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2026-03-11 00:54:26.638474 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.638478 | orchestrator | 2026-03-11 00:54:26.638482 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2026-03-11 00:54:26.638486 | orchestrator | Wednesday 11 March 2026 00:50:44 +0000 (0:00:00.981) 0:02:26.173 ******* 2026-03-11 00:54:26.638491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-11 00:54:26.638497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-11 00:54:26.638503 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-11 00:54:26.638508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-11 00:54:26.638512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-11 00:54:26.638516 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.638520 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-11 00:54:26.638524 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-11 00:54:26.638528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ 
}'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-11 00:54:26.638535 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-11 00:54:26.638539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-11 00:54:26.638549 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.638553 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-11 00:54:26.638559 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-11 00:54:26.638563 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2026-03-11 00:54:26.638567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2026-03-11 00:54:26.638571 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2026-03-11 00:54:26.638575 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.638579 | orchestrator | 2026-03-11 00:54:26.638583 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2026-03-11 00:54:26.638586 | orchestrator | Wednesday 11 March 2026 00:50:45 +0000 (0:00:00.924) 0:02:27.098 ******* 2026-03-11 00:54:26.638590 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.638594 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.638598 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.638601 | orchestrator | 2026-03-11 00:54:26.638605 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2026-03-11 00:54:26.638609 | orchestrator | Wednesday 11 March 2026 00:50:46 +0000 (0:00:01.240) 0:02:28.338 ******* 2026-03-11 00:54:26.638613 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.638617 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.638620 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.638624 | orchestrator | 2026-03-11 00:54:26.638628 | orchestrator | TASK [include_role : influxdb] ************************************************* 2026-03-11 00:54:26.638632 | orchestrator | Wednesday 11 March 2026 00:50:48 +0000 (0:00:01.787) 0:02:30.126 ******* 2026-03-11 00:54:26.638636 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.638639 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.638643 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.638647 | orchestrator | 2026-03-11 
00:54:26.638651 | orchestrator | TASK [include_role : ironic] *************************************************** 2026-03-11 00:54:26.638654 | orchestrator | Wednesday 11 March 2026 00:50:48 +0000 (0:00:00.318) 0:02:30.445 ******* 2026-03-11 00:54:26.638658 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.638662 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.638666 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.638669 | orchestrator | 2026-03-11 00:54:26.638673 | orchestrator | TASK [include_role : keystone] ************************************************* 2026-03-11 00:54:26.638677 | orchestrator | Wednesday 11 March 2026 00:50:48 +0000 (0:00:00.532) 0:02:30.977 ******* 2026-03-11 00:54:26.638681 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:54:26.638684 | orchestrator | 2026-03-11 00:54:26.638688 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2026-03-11 00:54:26.638696 | orchestrator | Wednesday 11 March 2026 00:50:49 +0000 (0:00:00.939) 0:02:31.917 ******* 2026-03-11 00:54:26.638703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 00:54:26.638710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 00:54:26.638715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 00:54:26.638719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 00:54:26.638724 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 00:54:26.638731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 00:54:26.638737 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 00:54:26.638745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 00:54:26.638749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 00:54:26.638753 | orchestrator | 2026-03-11 00:54:26.638757 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2026-03-11 00:54:26.638761 | orchestrator | Wednesday 11 March 2026 00:50:53 +0000 (0:00:03.941) 0:02:35.858 ******* 2026-03-11 00:54:26.638765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 00:54:26.638773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 00:54:26.638779 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 00:54:26.638783 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.638824 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 00:54:26.638831 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 00:54:26.638838 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 00:54:26.638845 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.638849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': 
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2026-03-11 00:54:26.638857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2026-03-11 00:54:26.638864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2026-03-11 00:54:26.638868 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.638872 | orchestrator |
2026-03-11 00:54:26.638876 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] **********************
2026-03-11 00:54:26.638889 | orchestrator | Wednesday 11 March 2026 00:50:54 +0000 (0:00:00.539) 0:02:36.398 *******
2026-03-11 00:54:26.638895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-11 00:54:26.638903 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-11 00:54:26.638909 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.638915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-11 00:54:26.638920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-11 00:54:26.638926 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.638932 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-11 00:54:26.638943 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})
2026-03-11 00:54:26.638949 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.638955 | orchestrator |
2026-03-11 00:54:26.638960 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] ***********
2026-03-11 00:54:26.638967 | orchestrator | Wednesday 11 March 2026 00:50:55 +0000 (0:00:00.751) 0:02:37.149 *******
2026-03-11 00:54:26.638974 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:54:26.638981 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:54:26.638986 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:54:26.638992 | orchestrator |
2026-03-11 00:54:26.638997 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] ***********
2026-03-11 00:54:26.639002 | orchestrator | Wednesday 11 March 2026 00:50:56 +0000 (0:00:01.077) 0:02:38.227 *******
2026-03-11 00:54:26.639008 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:54:26.639014 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:54:26.639019 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:54:26.639025 | orchestrator |
2026-03-11 00:54:26.639030 | orchestrator | TASK [include_role : letsencrypt] **********************************************
2026-03-11 00:54:26.639036 | orchestrator | Wednesday 11 March 2026 00:50:57 +0000 (0:00:01.667) 0:02:39.894 *******
2026-03-11 00:54:26.639041 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.639046 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.639051 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.639057 | orchestrator |
2026-03-11 00:54:26.639062 | orchestrator | TASK [include_role : magnum] ***************************************************
2026-03-11 00:54:26.639068 | orchestrator | Wednesday 11 March 2026 00:50:58 +0000 (0:00:00.866) 0:02:40.275 *******
2026-03-11 00:54:26.639076 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:54:26.639081 | orchestrator |
2026-03-11 00:54:26.639088 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] *********************
2026-03-11 00:54:26.639096 | orchestrator | Wednesday 11 March 2026 00:50:59 +0000 (0:00:00.866) 0:02:41.142 *******
2026-03-11 00:54:26.639108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 00:54:26.639119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 00:54:26.639139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 00:54:26.639155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639162 | orchestrator |
2026-03-11 00:54:26.639168 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] ***
2026-03-11 00:54:26.639175 | orchestrator | Wednesday 11 March 2026 00:51:02 +0000 (0:00:03.302) 0:02:44.445 *******
2026-03-11 00:54:26.639185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 00:54:26.639196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639202 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.639209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 00:54:26.639215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639221 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.639235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 00:54:26.639241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639255 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.639261 | orchestrator |
2026-03-11 00:54:26.639267 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************
2026-03-11 00:54:26.639273 | orchestrator | Wednesday 11 March 2026 00:51:03 +0000 (0:00:00.779) 0:02:45.224 *******
2026-03-11 00:54:26.639279 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-03-11 00:54:26.639284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-03-11 00:54:26.639288 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.639292 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-03-11 00:54:26.639296 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-03-11 00:54:26.639300 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.639304 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})
2026-03-11 00:54:26.639307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})
2026-03-11 00:54:26.639311 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.639315 | orchestrator |
2026-03-11 00:54:26.639319 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] *************
2026-03-11 00:54:26.639322 | orchestrator | Wednesday 11 March 2026 00:51:03 +0000 (0:00:00.801) 0:02:46.025 *******
2026-03-11 00:54:26.639326 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:54:26.639330 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:54:26.639334 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:54:26.639338 | orchestrator |
2026-03-11 00:54:26.639342 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] *************
2026-03-11 00:54:26.639346 | orchestrator | Wednesday 11 March 2026 00:51:05 +0000 (0:00:01.244) 0:02:47.270 *******
2026-03-11 00:54:26.639349 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:54:26.639353 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:54:26.639357 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:54:26.639360 | orchestrator |
2026-03-11 00:54:26.639364 | orchestrator | TASK [include_role : manila] ***************************************************
2026-03-11 00:54:26.639368 | orchestrator | Wednesday 11 March 2026 00:51:07 +0000 (0:00:01.830) 0:02:49.100 *******
2026-03-11 00:54:26.639372 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:54:26.639375 | orchestrator |
2026-03-11 00:54:26.639379 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] *********************
2026-03-11 00:54:26.639383 | orchestrator | Wednesday 11 March 2026 00:51:08 +0000 (0:00:01.104) 0:02:50.205 *******
2026-03-11 00:54:26.639393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-11 00:54:26.639409 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-11 00:54:26.639438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-11 00:54:26.639454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639460 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639466 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639481 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639499 | orchestrator |
2026-03-11 00:54:26.639506 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] ***
2026-03-11 00:54:26.639512 | orchestrator | Wednesday 11 March 2026 00:51:11 +0000 (0:00:03.112) 0:02:53.318 *******
2026-03-11 00:54:26.639528 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-11 00:54:26.639536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639549 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639556 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.639562 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-11 00:54:26.639573 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639601 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.639607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})
2026-03-11 00:54:26.639614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639631 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.639638 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.639645 | orchestrator |
2026-03-11 00:54:26.639651 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************
2026-03-11 00:54:26.639661 | orchestrator | Wednesday 11 March 2026 00:51:11 +0000 (0:00:00.601) 0:02:53.919 *******
2026-03-11 00:54:26.639665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-11 00:54:26.639669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-11 00:54:26.639673 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.639677 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-11 00:54:26.639687 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-11 00:54:26.639693 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.639699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})
2026-03-11 00:54:26.639705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})
2026-03-11 00:54:26.639710 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.639716 | orchestrator |
2026-03-11 00:54:26.639722 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] *************
2026-03-11 00:54:26.639728 | orchestrator | Wednesday 11 March 2026 00:51:12 +0000 (0:00:01.004) 0:02:54.923 *******
2026-03-11 00:54:26.639734 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:54:26.639739 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:54:26.639745 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:54:26.639752 | orchestrator |
2026-03-11 00:54:26.639759 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] *************
2026-03-11 00:54:26.639766 | orchestrator | Wednesday 11 March 2026 00:51:14 +0000 (0:00:01.175) 0:02:56.099 *******
2026-03-11 00:54:26.639772 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:54:26.639778 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:54:26.639784 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:54:26.639790 | orchestrator |
2026-03-11 00:54:26.639818 | orchestrator | TASK [include_role : mariadb] **************************************************
2026-03-11 00:54:26.639824 | orchestrator | Wednesday 11 March 2026 00:51:16 +0000 (0:00:02.027) 0:02:58.127 *******
2026-03-11 00:54:26.639830 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:54:26.639836 | orchestrator |
2026-03-11 00:54:26.639842 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] *******************************
2026-03-11 00:54:26.639849 | orchestrator | Wednesday 11 March 2026 00:51:17 +0000 (0:00:01.171) 0:02:59.299 *******
2026-03-11 00:54:26.639859 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2026-03-11 00:54:26.639863 | orchestrator |
2026-03-11 00:54:26.639867 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ********************
2026-03-11 00:54:26.639871 | orchestrator | Wednesday 11 March
2026 00:51:20 +0000 (0:00:03.522) 0:03:02.821 ******* 2026-03-11 00:54:26.639881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:54:26.639892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-11 00:54:26.639899 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.639905 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:54:26.639922 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-11 00:54:26.639933 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.639948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:54:26.639955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-11 00:54:26.639961 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.639966 | orchestrator | 2026-03-11 00:54:26.639973 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2026-03-11 00:54:26.639979 | orchestrator | Wednesday 11 March 2026 
00:51:22 +0000 (0:00:02.074) 0:03:04.896 ******* 2026-03-11 00:54:26.639991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:54:26.639998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-11 00:54:26.640004 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.640031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': 
'3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:54:26.640043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-11 00:54:26.640048 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.640058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': 
True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:54:26.640069 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2026-03-11 00:54:26.640075 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.640081 | orchestrator | 2026-03-11 00:54:26.640087 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2026-03-11 00:54:26.640092 | orchestrator | Wednesday 11 March 2026 00:51:25 +0000 
(0:00:02.373) 0:03:07.269 ******* 2026-03-11 00:54:26.640099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-11 00:54:26.640110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-11 00:54:26.640117 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.640123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-11 
00:54:26.640130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-11 00:54:26.640135 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.640141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-11 00:54:26.640148 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2026-03-11 00:54:26.640153 | orchestrator | skipping: 
[testbed-node-2] 2026-03-11 00:54:26.640157 | orchestrator | 2026-03-11 00:54:26.640161 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************ 2026-03-11 00:54:26.640164 | orchestrator | Wednesday 11 March 2026 00:51:28 +0000 (0:00:02.906) 0:03:10.175 ******* 2026-03-11 00:54:26.640168 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.640172 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.640179 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.640183 | orchestrator | 2026-03-11 00:54:26.640187 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2026-03-11 00:54:26.640190 | orchestrator | Wednesday 11 March 2026 00:51:29 +0000 (0:00:01.764) 0:03:11.940 ******* 2026-03-11 00:54:26.640194 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.640198 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.640202 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.640206 | orchestrator | 2026-03-11 00:54:26.640209 | orchestrator | TASK [include_role : masakari] ************************************************* 2026-03-11 00:54:26.640213 | orchestrator | Wednesday 11 March 2026 00:51:31 +0000 (0:00:01.686) 0:03:13.627 ******* 2026-03-11 00:54:26.640217 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.640221 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.640224 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.640228 | orchestrator | 2026-03-11 00:54:26.640232 | orchestrator | TASK [include_role : memcached] ************************************************ 2026-03-11 00:54:26.640236 | orchestrator | Wednesday 11 March 2026 00:51:31 +0000 (0:00:00.366) 0:03:13.993 ******* 2026-03-11 00:54:26.640239 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:54:26.640243 | orchestrator | 2026-03-11 00:54:26.640247 | orchestrator | 
TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2026-03-11 00:54:26.640251 | orchestrator | Wednesday 11 March 2026 00:51:33 +0000 (0:00:01.491) 0:03:15.485 ******* 2026-03-11 00:54:26.640256 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-11 00:54:26.640261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-11 00:54:26.640268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 
'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2026-03-11 00:54:26.640272 | orchestrator | 2026-03-11 00:54:26.640276 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2026-03-11 00:54:26.640280 | orchestrator | Wednesday 11 March 2026 00:51:35 +0000 (0:00:01.696) 0:03:17.181 ******* 2026-03-11 00:54:26.640291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2026-03-11 00:54:26.640296 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.640300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': 
['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-11 00:54:26.640304 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.640307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})
2026-03-11 00:54:26.640311 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.640315 | orchestrator |
2026-03-11 00:54:26.640319 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] *********************
2026-03-11 00:54:26.640323 | orchestrator | Wednesday 11 March 2026 00:51:35 +0000 (0:00:00.412) 0:03:17.593 *******
2026-03-11 00:54:26.640328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-11 00:54:26.640333 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.640337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-11 00:54:26.640341 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.640344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2026-03-11 00:54:26.640348 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.640352 | orchestrator |
2026-03-11 00:54:26.640356 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2026-03-11 00:54:26.640363 | orchestrator | Wednesday 11 March 2026 00:51:36 +0000 (0:00:00.942) 0:03:18.536 *******
2026-03-11 00:54:26.640367 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.640374 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.640378 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.640381 | orchestrator |
2026-03-11 00:54:26.640385 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2026-03-11 00:54:26.640389 | orchestrator | Wednesday 11 March 2026 00:51:36 +0000 (0:00:00.495) 0:03:19.031 *******
2026-03-11 00:54:26.640393 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.640397 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.640400 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.640404 | orchestrator |
2026-03-11 00:54:26.640408 | orchestrator | TASK [include_role : mistral] **************************************************
2026-03-11 00:54:26.640412 | orchestrator | Wednesday 11 March 2026 00:51:38 +0000 (0:00:01.419) 0:03:20.451 *******
2026-03-11 00:54:26.640416 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.640420 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.640424 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.640428 | orchestrator |
2026-03-11 00:54:26.640432 | orchestrator | TASK [include_role : neutron] **************************************************
2026-03-11 00:54:26.640438 | orchestrator | Wednesday 11 March 2026 00:51:38 +0000 (0:00:00.328) 0:03:20.780 *******
2026-03-11 00:54:26.640442 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:54:26.640446 | orchestrator |
2026-03-11 00:54:26.640450 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2026-03-11 00:54:26.640454 | orchestrator | Wednesday 11 March 2026 00:51:40 +0000 (0:00:01.493) 0:03:22.274 *******
2026-03-11 00:54:26.640458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 00:54:26.640463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640467 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-11 00:54:26.640615 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:54:26.640626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:54:26.640630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 00:54:26.640647 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640655 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 00:54:26.640659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-11 00:54:26.640663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:54:26.640667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640676 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640690 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-11 00:54:26.640695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-11 00:54:26.640709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 00:54:26.640715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-11 00:54:26.640722 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640727 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:54:26.640740 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640755 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:54:26.640771 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640790 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2026-03-11 00:54:26.640814 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 00:54:26.640820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-11 00:54:26.640847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:54:26.640857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:54:26.640866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640872 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:54:26.640878 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-11 00:54:26.640888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640899 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-11 00:54:26.640908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 00:54:26.640914 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640920 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2026-03-11 00:54:26.640931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})
2026-03-11 00:54:26.640938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.640948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2026-03-11 00:54:26.640958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2026-03-11 00:54:26.640965 | orchestrator |
2026-03-11 00:54:26.640972 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2026-03-11 00:54:26.640978 | orchestrator | Wednesday 11 March 2026 00:51:44 +0000 (0:00:04.501) 0:03:26.775 *******
2026-03-11 00:54:26.640985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 00:54:26.640997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2026-03-11 00:54:26.641003 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-11 00:54:26.641030 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-11 00:54:26.641044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-11 00:54:26.641048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 00:54:26.641061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 
'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-11 00:54:26.641072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-11 00:54:26.641081 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-11 00:54:26.641085 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-11 00:54:26.641110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-11 
00:54:26.641114 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-11 00:54:26.641118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-11 00:54:26.641123 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.641129 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-11 00:54:26.641140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-11 00:54:26.641153 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-11 00:54:26.641173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 00:54:26.641177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-11 00:54:26.641203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-11 00:54:26.641212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2026-03-11 00:54:26.641219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641228 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  
2026-03-11 00:54:26.641237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-11 00:54:26.641241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-11 00:54:26.641246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-11 00:54:26.641251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-11 00:54:26.641256 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.641263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641271 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 00:54:26.641280 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2026-03-11 00:54:26.641290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2026-03-11 00:54:26.641294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2026-03-11 00:54:26.641307 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2026-03-11 00:54:26.641314 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.641319 | orchestrator | 2026-03-11 00:54:26.641322 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2026-03-11 00:54:26.641326 | orchestrator | Wednesday 11 March 2026 00:51:46 +0000 (0:00:01.474) 0:03:28.250 ******* 2026-03-11 00:54:26.641331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-11 00:54:26.641336 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-11 00:54:26.641340 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.641344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-11 00:54:26.641348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-11 00:54:26.641352 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.641356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2026-03-11 00:54:26.641360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2026-03-11 00:54:26.641363 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.641367 | orchestrator | 2026-03-11 00:54:26.641371 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2026-03-11 00:54:26.641375 | orchestrator | Wednesday 11 March 2026 00:51:48 +0000 (0:00:02.041) 0:03:30.291 ******* 2026-03-11 00:54:26.641379 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.641383 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.641387 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.641391 | orchestrator | 2026-03-11 00:54:26.641395 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2026-03-11 00:54:26.641399 | orchestrator | Wednesday 11 March 2026 00:51:49 +0000 (0:00:01.301) 0:03:31.592 ******* 2026-03-11 00:54:26.641402 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.641406 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.641410 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.641413 | orchestrator | 2026-03-11 00:54:26.641417 | orchestrator | TASK [include_role : placement] ************************************************ 2026-03-11 00:54:26.641421 | orchestrator | Wednesday 11 March 2026 00:51:51 +0000 (0:00:02.104) 0:03:33.697 ******* 2026-03-11 00:54:26.641425 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:54:26.641429 | orchestrator | 2026-03-11 00:54:26.641432 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2026-03-11 00:54:26.641436 | 
orchestrator | Wednesday 11 March 2026 00:51:52 +0000 (0:00:01.184) 0:03:34.881 ******* 2026-03-11 00:54:26.641443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 00:54:26.641547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 00:54:26.641556 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 00:54:26.641560 | orchestrator | 2026-03-11 00:54:26.641564 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2026-03-11 00:54:26.641568 | orchestrator | Wednesday 11 March 2026 00:51:56 +0000 (0:00:03.702) 0:03:38.584 ******* 2026-03-11 00:54:26.641572 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-11 00:54:26.641576 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.641580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-11 00:54:26.641588 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.641607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-11 00:54:26.641612 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.641615 | orchestrator | 2026-03-11 00:54:26.641619 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2026-03-11 00:54:26.641623 | orchestrator | Wednesday 11 March 2026 00:51:57 +0000 (0:00:00.523) 0:03:39.108 ******* 2026-03-11 00:54:26.641628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-11 00:54:26.641632 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-11 00:54:26.641636 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.641639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-11 00:54:26.641643 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-11 00:54:26.641647 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-11 00:54:26.641651 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.641655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2026-03-11 00:54:26.641659 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.641663 | orchestrator | 2026-03-11 00:54:26.641666 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2026-03-11 00:54:26.641670 | orchestrator | Wednesday 11 March 2026 00:51:57 +0000 (0:00:00.823) 0:03:39.931 ******* 2026-03-11 00:54:26.641674 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.641678 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.641685 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.641689 | orchestrator | 2026-03-11 00:54:26.641693 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2026-03-11 00:54:26.641697 | orchestrator | Wednesday 11 March 2026 00:51:59 +0000 (0:00:01.893) 0:03:41.825 ******* 2026-03-11 00:54:26.641700 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.641704 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.641708 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.641712 | orchestrator | 2026-03-11 00:54:26.641716 | orchestrator | TASK [include_role : nova] ***************************************************** 2026-03-11 00:54:26.641720 | orchestrator | Wednesday 11 March 2026 00:52:01 +0000 (0:00:01.906) 0:03:43.731 ******* 2026-03-11 00:54:26.641723 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:54:26.641727 | orchestrator | 2026-03-11 00:54:26.641731 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2026-03-11 00:54:26.641735 | orchestrator | Wednesday 11 March 2026 00:52:03 +0000 (0:00:01.556) 0:03:45.288 ******* 2026-03-11 00:54:26.641742 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 00:54:26.641759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641765 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 
'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 00:54:26.641778 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 
'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641916 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 
'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 00:54:26.641926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641945 | orchestrator | 2026-03-11 00:54:26.641949 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2026-03-11 00:54:26.641954 | orchestrator | Wednesday 11 March 2026 00:52:07 +0000 (0:00:04.481) 0:03:49.769 ******* 2026-03-11 00:54:26.641958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 00:54:26.641965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641982 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 
'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.641987 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.641991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 00:54:26.641999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.642003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.642007 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.642033 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 
'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 00:54:26.642050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.642057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.642064 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.642068 | orchestrator | 2026-03-11 00:54:26.642072 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2026-03-11 00:54:26.642076 | orchestrator | Wednesday 11 March 2026 00:52:08 +0000 (0:00:01.244) 0:03:51.013 ******* 2026-03-11 00:54:26.642080 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-11 00:54:26.642085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-11 00:54:26.642089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-11 00:54:26.642093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-11 00:54:26.642097 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.642101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-11 00:54:26.642105 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-11 00:54:26.642109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-11 00:54:26.642113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 
'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-11 00:54:26.642116 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.642120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-11 00:54:26.642125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2026-03-11 00:54:26.642129 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-11 00:54:26.642133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2026-03-11 00:54:26.642137 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.642141 | orchestrator | 2026-03-11 00:54:26.642157 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2026-03-11 00:54:26.642217 | orchestrator | Wednesday 11 March 2026 00:52:09 +0000 (0:00:00.881) 0:03:51.895 ******* 2026-03-11 00:54:26.642233 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.642238 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.642245 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.642249 | orchestrator | 2026-03-11 00:54:26.642254 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2026-03-11 00:54:26.642258 | orchestrator | Wednesday 11 March 2026 00:52:11 +0000 (0:00:01.425) 0:03:53.320 ******* 2026-03-11 
00:54:26.642263 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.642267 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.642271 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.642275 | orchestrator | 2026-03-11 00:54:26.642280 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2026-03-11 00:54:26.642284 | orchestrator | Wednesday 11 March 2026 00:52:13 +0000 (0:00:02.141) 0:03:55.461 ******* 2026-03-11 00:54:26.642288 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:54:26.642293 | orchestrator | 2026-03-11 00:54:26.642297 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2026-03-11 00:54:26.642301 | orchestrator | Wednesday 11 March 2026 00:52:14 +0000 (0:00:01.562) 0:03:57.024 ******* 2026-03-11 00:54:26.642306 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2026-03-11 00:54:26.642311 | orchestrator | 2026-03-11 00:54:26.642315 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2026-03-11 00:54:26.642320 | orchestrator | Wednesday 11 March 2026 00:52:15 +0000 (0:00:00.815) 0:03:57.839 ******* 2026-03-11 00:54:26.642325 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-11 00:54:26.642330 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-11 00:54:26.642334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2026-03-11 00:54:26.642340 | orchestrator | 2026-03-11 00:54:26.642344 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2026-03-11 00:54:26.642350 | orchestrator | Wednesday 11 March 2026 00:52:20 +0000 (0:00:04.442) 0:04:02.282 ******* 2026-03-11 00:54:26.642357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-11 00:54:26.642362 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.642366 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-11 00:54:26.642374 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.642395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-11 00:54:26.642401 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.642404 | orchestrator | 2026-03-11 00:54:26.642408 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2026-03-11 00:54:26.642412 | orchestrator | Wednesday 11 March 2026 00:52:21 +0000 (0:00:01.050) 0:04:03.332 ******* 2026-03-11 00:54:26.642416 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-11 00:54:26.642420 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  
2026-03-11 00:54:26.642425 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.642428 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-11 00:54:26.642433 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-11 00:54:26.642437 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.642440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-11 00:54:26.642444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2026-03-11 00:54:26.642448 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.642452 | orchestrator | 2026-03-11 00:54:26.642456 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-11 00:54:26.642460 | orchestrator | Wednesday 11 March 2026 00:52:22 +0000 (0:00:01.550) 0:04:04.883 ******* 2026-03-11 00:54:26.642463 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.642467 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.642471 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.642475 | orchestrator | 2026-03-11 00:54:26.642478 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] 
********** 2026-03-11 00:54:26.642482 | orchestrator | Wednesday 11 March 2026 00:52:25 +0000 (0:00:02.473) 0:04:07.356 ******* 2026-03-11 00:54:26.642486 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.642490 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.642497 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.642501 | orchestrator | 2026-03-11 00:54:26.642505 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2026-03-11 00:54:26.642509 | orchestrator | Wednesday 11 March 2026 00:52:28 +0000 (0:00:03.019) 0:04:10.376 ******* 2026-03-11 00:54:26.642513 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2026-03-11 00:54:26.642516 | orchestrator | 2026-03-11 00:54:26.642521 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2026-03-11 00:54:26.642525 | orchestrator | Wednesday 11 March 2026 00:52:29 +0000 (0:00:01.363) 0:04:11.740 ******* 2026-03-11 00:54:26.642531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-11 00:54:26.642535 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.642551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 
'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-11 00:54:26.642556 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.642560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-11 00:54:26.642564 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.642568 | orchestrator | 2026-03-11 00:54:26.642572 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2026-03-11 00:54:26.642575 | orchestrator | Wednesday 11 March 2026 00:52:30 +0000 (0:00:01.282) 0:04:13.023 ******* 2026-03-11 00:54:26.642580 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-11 00:54:26.642584 | orchestrator | skipping: [testbed-node-0] 2026-03-11 
00:54:26.642588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-11 00:54:26.642596 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.642600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2026-03-11 00:54:26.642604 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.642608 | orchestrator | 2026-03-11 00:54:26.642612 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2026-03-11 00:54:26.642616 | orchestrator | Wednesday 11 March 2026 00:52:32 +0000 (0:00:01.293) 0:04:14.316 ******* 2026-03-11 00:54:26.642620 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.642624 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.642627 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.642631 | orchestrator | 2026-03-11 00:54:26.642635 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-11 00:54:26.642639 | orchestrator | Wednesday 11 March 2026 
00:52:34 +0000 (0:00:01.813) 0:04:16.130 ******* 2026-03-11 00:54:26.642643 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:54:26.642648 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:54:26.642651 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:54:26.642656 | orchestrator | 2026-03-11 00:54:26.642662 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-11 00:54:26.642666 | orchestrator | Wednesday 11 March 2026 00:52:36 +0000 (0:00:02.274) 0:04:18.404 ******* 2026-03-11 00:54:26.642670 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:54:26.642674 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:54:26.642678 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:54:26.642682 | orchestrator | 2026-03-11 00:54:26.642685 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2026-03-11 00:54:26.642689 | orchestrator | Wednesday 11 March 2026 00:52:39 +0000 (0:00:02.769) 0:04:21.173 ******* 2026-03-11 00:54:26.642693 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2026-03-11 00:54:26.642697 | orchestrator | 2026-03-11 00:54:26.642701 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2026-03-11 00:54:26.642705 | orchestrator | Wednesday 11 March 2026 00:52:39 +0000 (0:00:00.839) 0:04:22.012 ******* 2026-03-11 00:54:26.642722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': 
'6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-11 00:54:26.642727 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.642731 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-11 00:54:26.642735 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.642741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-11 00:54:26.642754 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.642763 | orchestrator | 2026-03-11 00:54:26.642771 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2026-03-11 00:54:26.642780 | orchestrator | Wednesday 11 March 2026 00:52:41 +0000 (0:00:01.318) 0:04:23.331 ******* 2026-03-11 00:54:26.642786 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 
'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-11 00:54:26.642833 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.642842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-11 00:54:26.642848 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.642859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2026-03-11 00:54:26.642866 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.642872 | orchestrator | 2026-03-11 00:54:26.642877 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2026-03-11 00:54:26.642883 | orchestrator | Wednesday 11 March 2026 00:52:42 +0000 (0:00:01.306) 0:04:24.637 ******* 2026-03-11 00:54:26.642889 | orchestrator | skipping: [testbed-node-0] 
2026-03-11 00:54:26.642894 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.642900 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.642907 | orchestrator | 2026-03-11 00:54:26.642914 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2026-03-11 00:54:26.642920 | orchestrator | Wednesday 11 March 2026 00:52:44 +0000 (0:00:01.517) 0:04:26.155 ******* 2026-03-11 00:54:26.642926 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:54:26.642955 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:54:26.642962 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:54:26.642968 | orchestrator | 2026-03-11 00:54:26.642974 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2026-03-11 00:54:26.642979 | orchestrator | Wednesday 11 March 2026 00:52:46 +0000 (0:00:02.441) 0:04:28.597 ******* 2026-03-11 00:54:26.642985 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:54:26.642991 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:54:26.642997 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:54:26.643004 | orchestrator | 2026-03-11 00:54:26.643010 | orchestrator | TASK [include_role : octavia] ************************************************** 2026-03-11 00:54:26.643021 | orchestrator | Wednesday 11 March 2026 00:52:49 +0000 (0:00:03.167) 0:04:31.765 ******* 2026-03-11 00:54:26.643025 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:54:26.643029 | orchestrator | 2026-03-11 00:54:26.643033 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2026-03-11 00:54:26.643037 | orchestrator | Wednesday 11 March 2026 00:52:51 +0000 (0:00:01.561) 0:04:33.326 ******* 2026-03-11 00:54:26.643041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 00:54:26.643046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 00:54:26.643050 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 00:54:26.643058 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 00:54:26.643062 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.643081 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 
'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 00:54:26.643089 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 00:54:26.643093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 00:54:26.643097 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 00:54:26.643101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.643108 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 00:54:26.643130 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 00:54:26.643136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 00:54:26.643140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 00:54:26.643144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.643148 | orchestrator | 2026-03-11 00:54:26.643152 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2026-03-11 00:54:26.643156 | orchestrator | Wednesday 11 March 2026 00:52:55 +0000 (0:00:03.723) 0:04:37.049 ******* 2026-03-11 00:54:26.643163 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-11 00:54:26.643167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 00:54:26.643187 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 00:54:26.643192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 00:54:26.643196 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.643200 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.643204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-11 00:54:26.643208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 00:54:26.643214 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 00:54:26.643232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 00:54:26.643237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.643241 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.643245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-11 00:54:26.643249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 00:54:26.643253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 00:54:26.643259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 00:54:26.643277 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 00:54:26.643282 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.643286 | orchestrator | 2026-03-11 00:54:26.643290 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2026-03-11 00:54:26.643294 | orchestrator | Wednesday 11 March 2026 00:52:55 +0000 (0:00:00.732) 0:04:37.782 ******* 2026-03-11 00:54:26.643297 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-11 00:54:26.643301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-11 00:54:26.643305 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.643309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': 
'9876', 'tls_backend': 'no'}})  2026-03-11 00:54:26.643313 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-11 00:54:26.643317 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.643321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-11 00:54:26.643324 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2026-03-11 00:54:26.643328 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.643332 | orchestrator | 2026-03-11 00:54:26.643337 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2026-03-11 00:54:26.643340 | orchestrator | Wednesday 11 March 2026 00:52:57 +0000 (0:00:01.516) 0:04:39.298 ******* 2026-03-11 00:54:26.643344 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.643348 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.643352 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.643355 | orchestrator | 2026-03-11 00:54:26.643359 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2026-03-11 00:54:26.643363 | orchestrator | Wednesday 11 March 2026 00:52:58 +0000 (0:00:01.327) 0:04:40.626 ******* 2026-03-11 00:54:26.643367 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.643370 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.643374 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.643378 | orchestrator | 
2026-03-11 00:54:26.643382 | orchestrator | TASK [include_role : opensearch] *********************************************** 2026-03-11 00:54:26.643385 | orchestrator | Wednesday 11 March 2026 00:53:00 +0000 (0:00:02.084) 0:04:42.711 ******* 2026-03-11 00:54:26.643389 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:54:26.643396 | orchestrator | 2026-03-11 00:54:26.643400 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2026-03-11 00:54:26.643404 | orchestrator | Wednesday 11 March 2026 00:53:02 +0000 (0:00:01.785) 0:04:44.496 ******* 2026-03-11 00:54:26.643411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:54:26.643432 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:54:26.643439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:54:26.643447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': 
{'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:54:26.643455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:54:26.643486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:54:26.643494 | orchestrator | 2026-03-11 00:54:26.643501 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2026-03-11 00:54:26.643508 | orchestrator | Wednesday 11 March 2026 00:53:07 +0000 (0:00:05.386) 0:04:49.882 ******* 2026-03-11 00:54:26.643514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-11 00:54:26.643521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 
'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-11 00:54:26.643533 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.643540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-11 00:54:26.643550 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-11 00:54:26.643575 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.643582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-11 00:54:26.643590 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 
'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-11 00:54:26.643602 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.643608 | orchestrator | 2026-03-11 00:54:26.643615 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2026-03-11 00:54:26.643622 | orchestrator | Wednesday 11 March 2026 00:53:08 +0000 (0:00:00.687) 0:04:50.570 ******* 2026-03-11 00:54:26.643627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-11 00:54:26.643634 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-11 00:54:26.643639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 
'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-11 00:54:26.643645 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.643651 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-11 00:54:26.643658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-11 00:54:26.643668 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2026-03-11 00:54:26.643675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-11 00:54:26.643682 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.643688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-11 00:54:26.643713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2026-03-11 00:54:26.643721 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.643727 | orchestrator | 2026-03-11 00:54:26.643733 | orchestrator | TASK [proxysql-config : Copying over opensearch 
ProxySQL users config] ********* 2026-03-11 00:54:26.643739 | orchestrator | Wednesday 11 March 2026 00:53:09 +0000 (0:00:00.991) 0:04:51.561 ******* 2026-03-11 00:54:26.643746 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.643751 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.643755 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.643760 | orchestrator | 2026-03-11 00:54:26.643766 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2026-03-11 00:54:26.643772 | orchestrator | Wednesday 11 March 2026 00:53:10 +0000 (0:00:00.860) 0:04:52.422 ******* 2026-03-11 00:54:26.643779 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.643785 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.643807 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.643815 | orchestrator | 2026-03-11 00:54:26.643821 | orchestrator | TASK [include_role : prometheus] *********************************************** 2026-03-11 00:54:26.643828 | orchestrator | Wednesday 11 March 2026 00:53:11 +0000 (0:00:01.315) 0:04:53.738 ******* 2026-03-11 00:54:26.643843 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:54:26.643849 | orchestrator | 2026-03-11 00:54:26.643856 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2026-03-11 00:54:26.643862 | orchestrator | Wednesday 11 March 2026 00:53:13 +0000 (0:00:01.380) 0:04:55.119 ******* 2026-03-11 00:54:26.643869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-11 00:54:26.643877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 00:54:26.643884 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:54:26.643896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:54:26.643925 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-11 00:54:26.643932 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 00:54:26.643946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 00:54:26.643954 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-11 00:54:26.643960 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:54:26.643967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 00:54:26.643976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 
'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:54:26.643984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:54:26.644009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 00:54:26.644024 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 
00:54:26.644031 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 00:54:26.644038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-11 00:54:26.644046 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-11 00:54:26.644056 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:54:26.644067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:54:26.644074 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-11 00:54:26.644083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-11 00:54:26.644089 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 
'backend_http_extra': ['timeout server 45s']}}}})  2026-03-11 00:54:26.644095 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:54:26.644105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:54:26.644111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-11 00:54:26.644124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-11 00:54:26.644135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-11 00:54:26.644142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:54:26.644149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:54:26.644157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2026-03-11 00:54:26.644163 | orchestrator | 2026-03-11 00:54:26.644175 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2026-03-11 00:54:26.644182 | orchestrator | Wednesday 11 March 2026 00:53:17 +0000 (0:00:04.917) 0:05:00.036 ******* 2026-03-11 00:54:26.644192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-11 00:54:26.644205 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 00:54:26.644212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:54:26.644219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:54:26.644226 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 00:54:26.644232 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-11 00:54:26.644242 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-11 00:54:26.644257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-11 00:54:26.644263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:54:26.644270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 00:54:26.644276 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:54:26.644287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:54:26.644294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 
'dimensions': {}}})  2026-03-11 00:54:26.644298 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.644305 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 00:54:26.644318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 00:54:26.644322 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-11 00:54:26.644327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-11 00:54:26.644331 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2026-03-11 00:54:26.644340 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 00:54:26.644348 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:54:26.644354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:54:26.644358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:54:26.644362 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:54:26.644366 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-11 00:54:26.644370 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.644374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 00:54:26.644378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-11 00:54:26.644394 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2026-03-11 00:54:26.644399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:54:26.644403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 00:54:26.644406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-11 00:54:26.644410 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.644414 | orchestrator |
2026-03-11 00:54:26.644418 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2026-03-11 00:54:26.644422 | orchestrator | Wednesday 11 March 2026 00:53:18 +0000 (0:00:00.836) 0:05:00.873 *******
2026-03-11 00:54:26.644426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-11 00:54:26.644431 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-03-11 00:54:26.644435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-11 00:54:26.644439 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-11 00:54:26.644447 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.644451 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-11 00:54:26.644455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-03-11 00:54:26.644461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-11 00:54:26.644465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-11 00:54:26.644469 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.644476 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2026-03-11 00:54:26.644480 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2026-03-11 00:54:26.644484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-11 00:54:26.644488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2026-03-11 00:54:26.644492 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.644495 | orchestrator |
2026-03-11 00:54:26.644499 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2026-03-11 00:54:26.644503 | orchestrator | Wednesday 11 March 2026 00:53:19 +0000 (0:00:00.995) 0:05:01.868 *******
2026-03-11 00:54:26.644507 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.644511 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.644515 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.644518 | orchestrator |
2026-03-11 00:54:26.644522 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2026-03-11 00:54:26.644526 | orchestrator | Wednesday 11 March 2026 00:53:20 +0000 (0:00:00.466) 0:05:02.335 *******
2026-03-11 00:54:26.644530 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.644534 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.644538 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.644541 | orchestrator |
2026-03-11 00:54:26.644545 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2026-03-11 00:54:26.644549 | orchestrator | Wednesday 11 March 2026 00:53:21 +0000 (0:00:01.640) 0:05:03.976 *******
2026-03-11 00:54:26.644553 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:54:26.644557 | orchestrator |
2026-03-11 00:54:26.644561 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2026-03-11 00:54:26.644565 | orchestrator | Wednesday 11 March 2026 00:53:23 +0000 (0:00:01.800) 0:05:05.776 *******
2026-03-11 00:54:26.644572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-11 00:54:26.644580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-11 00:54:26.644587 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-11 00:54:26.644591 | orchestrator |
2026-03-11 00:54:26.644595 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2026-03-11 00:54:26.644599 | orchestrator | Wednesday 11 March 2026 00:53:26 +0000 (0:00:02.577) 0:05:08.353 *******
2026-03-11 00:54:26.644603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-11 00:54:26.644609 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.644613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-11 00:54:26.644617 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.644623 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2026-03-11 00:54:26.644627 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.644631 | orchestrator |
2026-03-11 00:54:26.644635 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2026-03-11 00:54:26.644641 | orchestrator | Wednesday 11 March 2026 00:53:27 +0000 (0:00:00.748) 0:05:09.103 *******
2026-03-11 00:54:26.644645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-11 00:54:26.644649 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.644654 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-11 00:54:26.644658 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.644661 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2026-03-11 00:54:26.644665 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.644669 | orchestrator |
2026-03-11 00:54:26.644673 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2026-03-11 00:54:26.644677 | orchestrator | Wednesday 11 March 2026 00:53:27 +0000 (0:00:00.649) 0:05:09.752 *******
2026-03-11 00:54:26.644680 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.644684 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.644688 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.644703 | orchestrator |
2026-03-11 00:54:26.644706 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2026-03-11 00:54:26.644710 | orchestrator | Wednesday 11 March 2026 00:53:28 +0000 (0:00:00.458) 0:05:10.211 *******
2026-03-11 00:54:26.644718 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.644722 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.644732 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.644736 | orchestrator |
2026-03-11 00:54:26.644739 | orchestrator | TASK [include_role : skyline] **************************************************
2026-03-11 00:54:26.644743 | orchestrator | Wednesday 11 March 2026 00:53:29 +0000 (0:00:01.323) 0:05:11.534 *******
2026-03-11 00:54:26.644747 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:54:26.644751 | orchestrator |
2026-03-11 00:54:26.644755 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2026-03-11 00:54:26.644759 | orchestrator | Wednesday 11 March 2026 00:53:31 +0000 (0:00:01.788) 0:05:13.322 *******
2026-03-11 00:54:26.644763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-11 00:54:26.644770 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-11 00:54:26.644777 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-11 00:54:26.644781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-11 00:54:26.644788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-11 00:54:26.644805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-11 00:54:26.644812 | orchestrator |
2026-03-11 00:54:26.644819 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2026-03-11 00:54:26.644824 | orchestrator | Wednesday 11 March 2026 00:53:36 +0000 (0:00:05.708) 0:05:19.031 *******
2026-03-11 00:54:26.644837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-11 00:54:26.644844 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-11 00:54:26.644854 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.644860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-11 00:54:26.644866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-11 00:54:26.644872 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.644882 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2026-03-11 00:54:26.644892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2026-03-11 00:54:26.644903 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.644908 | orchestrator |
2026-03-11 00:54:26.644914 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2026-03-11 00:54:26.644919 | orchestrator | Wednesday 11 March 2026 00:53:37 +0000 (0:00:00.574) 0:05:19.605 *******
2026-03-11 00:54:26.644925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-11 00:54:26.644931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-11 00:54:26.644938 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-11 00:54:26.644944 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-11 00:54:26.644950 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.644957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-11 00:54:26.644962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-11 00:54:26.644966 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-11 00:54:26.644970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-11 00:54:26.644974 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.644977 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-11 00:54:26.644981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2026-03-11 00:54:26.644988 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-11 00:54:26.644992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2026-03-11 00:54:26.644996 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.644999 | orchestrator |
2026-03-11 00:54:26.645003 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2026-03-11 00:54:26.645012 | orchestrator | Wednesday 11 March 2026 00:53:39 +0000 (0:00:01.714) 0:05:21.320 *******
2026-03-11 00:54:26.645016 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:54:26.645020 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:54:26.645023 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:54:26.645027 | orchestrator |
2026-03-11 00:54:26.645031 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2026-03-11 00:54:26.645038 | orchestrator | Wednesday 11 March 2026 00:53:40 +0000 (0:00:01.320) 0:05:22.640 *******
2026-03-11 00:54:26.645042 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:54:26.645045 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:54:26.645049 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:54:26.645053 | orchestrator |
2026-03-11 00:54:26.645057 | orchestrator | TASK [include_role : swift] ****************************************************
2026-03-11 00:54:26.645060 | orchestrator | Wednesday 11 March 2026 00:53:42 +0000 (0:00:02.166) 0:05:24.807 *******
2026-03-11 00:54:26.645064 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.645068 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.645072 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.645076 | orchestrator |
2026-03-11 00:54:26.645079 | orchestrator | TASK [include_role : tacker] ***************************************************
2026-03-11 00:54:26.645083 | orchestrator | Wednesday 11 March 2026 00:53:43 +0000 (0:00:00.342) 0:05:25.150 *******
2026-03-11 00:54:26.645087 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.645091 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.645094 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.645098 | orchestrator |
2026-03-11 00:54:26.645102 | orchestrator | TASK [include_role : trove] ****************************************************
2026-03-11 00:54:26.645106 | orchestrator | Wednesday 11 March 2026 00:53:43 +0000 (0:00:00.296) 0:05:25.446 *******
2026-03-11 00:54:26.645110 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.645114 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.645117 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.645121 | orchestrator |
2026-03-11 00:54:26.645125 | orchestrator | TASK [include_role : venus] ****************************************************
2026-03-11 00:54:26.645129 | orchestrator | Wednesday 11 March 2026 00:53:44 +0000 (0:00:00.633) 0:05:26.079 *******
2026-03-11 00:54:26.645133 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.645137 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.645143 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.645149 | orchestrator |
2026-03-11 00:54:26.645155 | orchestrator | TASK [include_role : watcher] **************************************************
2026-03-11 00:54:26.645160 | orchestrator | Wednesday 11 March 2026 00:53:44 +0000 (0:00:00.307) 0:05:26.387 *******
2026-03-11 00:54:26.645166 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:54:26.645173 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:54:26.645179 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:54:26.645185 | orchestrator |
2026-03-11 00:54:26.645192 | orchestrator | TASK [include_role : zun] ******************************************************
2026-03-11 00:54:26.645198 | orchestrator | Wednesday 11 March 2026 00:53:44 +0000 (0:00:00.299)
0:05:26.687 ******* 2026-03-11 00:54:26.645202 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.645206 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.645210 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.645213 | orchestrator | 2026-03-11 00:54:26.645217 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] ******* 2026-03-11 00:54:26.645221 | orchestrator | Wednesday 11 March 2026 00:53:45 +0000 (0:00:00.905) 0:05:27.592 ******* 2026-03-11 00:54:26.645225 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:54:26.645229 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:54:26.645233 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:54:26.645236 | orchestrator | 2026-03-11 00:54:26.645240 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] ********************** 2026-03-11 00:54:26.645244 | orchestrator | Wednesday 11 March 2026 00:53:46 +0000 (0:00:00.716) 0:05:28.309 ******* 2026-03-11 00:54:26.645252 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:54:26.645256 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:54:26.645260 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:54:26.645263 | orchestrator | 2026-03-11 00:54:26.645267 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] ************** 2026-03-11 00:54:26.645271 | orchestrator | Wednesday 11 March 2026 00:53:46 +0000 (0:00:00.318) 0:05:28.628 ******* 2026-03-11 00:54:26.645275 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:54:26.645279 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:54:26.645282 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:54:26.645286 | orchestrator | 2026-03-11 00:54:26.645290 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] ***************** 2026-03-11 00:54:26.645294 | orchestrator | Wednesday 11 March 2026 00:53:47 +0000 (0:00:00.968) 0:05:29.597 ******* 2026-03-11 00:54:26.645298 | 
orchestrator | ok: [testbed-node-0] 2026-03-11 00:54:26.645301 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:54:26.645305 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:54:26.645309 | orchestrator | 2026-03-11 00:54:26.645313 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] **************** 2026-03-11 00:54:26.645316 | orchestrator | Wednesday 11 March 2026 00:53:48 +0000 (0:00:01.238) 0:05:30.835 ******* 2026-03-11 00:54:26.645320 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:54:26.645324 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:54:26.645328 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:54:26.645331 | orchestrator | 2026-03-11 00:54:26.645335 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] **************** 2026-03-11 00:54:26.645339 | orchestrator | Wednesday 11 March 2026 00:53:49 +0000 (0:00:00.920) 0:05:31.755 ******* 2026-03-11 00:54:26.645343 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.645347 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.645351 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.645354 | orchestrator | 2026-03-11 00:54:26.645358 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] ************** 2026-03-11 00:54:26.645362 | orchestrator | Wednesday 11 March 2026 00:53:54 +0000 (0:00:04.597) 0:05:36.352 ******* 2026-03-11 00:54:26.645366 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:54:26.645370 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:54:26.645373 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:54:26.645377 | orchestrator | 2026-03-11 00:54:26.645381 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] *************** 2026-03-11 00:54:26.645385 | orchestrator | Wednesday 11 March 2026 00:53:57 +0000 (0:00:02.795) 0:05:39.148 ******* 2026-03-11 00:54:26.645388 | orchestrator | changed: [testbed-node-0] 2026-03-11 
00:54:26.645392 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.645396 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.645400 | orchestrator | 2026-03-11 00:54:26.645404 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] ************* 2026-03-11 00:54:26.645408 | orchestrator | Wednesday 11 March 2026 00:54:11 +0000 (0:00:14.055) 0:05:53.204 ******* 2026-03-11 00:54:26.645412 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:54:26.645419 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:54:26.645423 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:54:26.645427 | orchestrator | 2026-03-11 00:54:26.645439 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] ************* 2026-03-11 00:54:26.645466 | orchestrator | Wednesday 11 March 2026 00:54:11 +0000 (0:00:00.784) 0:05:53.988 ******* 2026-03-11 00:54:26.645471 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:54:26.645475 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:54:26.645479 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:54:26.645482 | orchestrator | 2026-03-11 00:54:26.645486 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] ***************** 2026-03-11 00:54:26.645490 | orchestrator | Wednesday 11 March 2026 00:54:19 +0000 (0:00:08.009) 0:06:01.998 ******* 2026-03-11 00:54:26.645494 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.645498 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.645505 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.645509 | orchestrator | 2026-03-11 00:54:26.645513 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] **************** 2026-03-11 00:54:26.645517 | orchestrator | Wednesday 11 March 2026 00:54:20 +0000 (0:00:00.335) 0:06:02.333 ******* 2026-03-11 00:54:26.645521 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.645524 | 
orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.645528 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.645532 | orchestrator | 2026-03-11 00:54:26.645535 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] ************** 2026-03-11 00:54:26.645539 | orchestrator | Wednesday 11 March 2026 00:54:20 +0000 (0:00:00.700) 0:06:03.034 ******* 2026-03-11 00:54:26.645543 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.645547 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.645551 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.645554 | orchestrator | 2026-03-11 00:54:26.645558 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] **************** 2026-03-11 00:54:26.645562 | orchestrator | Wednesday 11 March 2026 00:54:21 +0000 (0:00:00.352) 0:06:03.386 ******* 2026-03-11 00:54:26.645566 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.645569 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.645573 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.645577 | orchestrator | 2026-03-11 00:54:26.645581 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] *************** 2026-03-11 00:54:26.645584 | orchestrator | Wednesday 11 March 2026 00:54:21 +0000 (0:00:00.387) 0:06:03.774 ******* 2026-03-11 00:54:26.645588 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.645592 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.645596 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.645600 | orchestrator | 2026-03-11 00:54:26.645604 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] ************* 2026-03-11 00:54:26.645607 | orchestrator | Wednesday 11 March 2026 00:54:22 +0000 (0:00:00.335) 0:06:04.109 ******* 2026-03-11 00:54:26.645612 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:54:26.645615 | 
orchestrator | skipping: [testbed-node-1] 2026-03-11 00:54:26.645619 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:54:26.645623 | orchestrator | 2026-03-11 00:54:26.645627 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] ************* 2026-03-11 00:54:26.645630 | orchestrator | Wednesday 11 March 2026 00:54:22 +0000 (0:00:00.358) 0:06:04.467 ******* 2026-03-11 00:54:26.645634 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:54:26.645638 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:54:26.645642 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:54:26.645645 | orchestrator | 2026-03-11 00:54:26.645649 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************ 2026-03-11 00:54:26.645653 | orchestrator | Wednesday 11 March 2026 00:54:23 +0000 (0:00:01.311) 0:06:05.778 ******* 2026-03-11 00:54:26.645657 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:54:26.645660 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:54:26.645664 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:54:26.645668 | orchestrator | 2026-03-11 00:54:26.645672 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:54:26.645676 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-11 00:54:26.645680 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-11 00:54:26.645684 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0 2026-03-11 00:54:26.645688 | orchestrator | 2026-03-11 00:54:26.645691 | orchestrator | 2026-03-11 00:54:26.645695 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:54:26.645705 | orchestrator | Wednesday 11 March 2026 00:54:24 +0000 (0:00:00.840) 0:06:06.619 ******* 2026-03-11 
00:54:26.645708 | orchestrator | =============================================================================== 2026-03-11 00:54:26.645714 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 14.06s 2026-03-11 00:54:26.645718 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 8.01s 2026-03-11 00:54:26.645722 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 5.71s 2026-03-11 00:54:26.645726 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 5.48s 2026-03-11 00:54:26.645730 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.39s 2026-03-11 00:54:26.645733 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.15s 2026-03-11 00:54:26.645737 | orchestrator | haproxy-config : Copying over cinder haproxy config --------------------- 5.10s 2026-03-11 00:54:26.645741 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.92s 2026-03-11 00:54:26.645745 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 4.60s 2026-03-11 00:54:26.645751 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.50s 2026-03-11 00:54:26.645757 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.48s 2026-03-11 00:54:26.645764 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 4.44s 2026-03-11 00:54:26.645770 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.23s 2026-03-11 00:54:26.645776 | orchestrator | loadbalancer : Copying checks for services which are enabled ------------ 4.19s 2026-03-11 00:54:26.645782 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 4.07s 2026-03-11 
00:54:26.645788 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 3.94s 2026-03-11 00:54:26.645834 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 3.84s 2026-03-11 00:54:26.645840 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 3.72s 2026-03-11 00:54:26.645847 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.71s 2026-03-11 00:54:26.645853 | orchestrator | haproxy-config : Copying over placement haproxy config ------------------ 3.70s 2026-03-11 00:54:26.645859 | orchestrator | 2026-03-11 00:54:26 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:54:26.645867 | orchestrator | 2026-03-11 00:54:26 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:54:26.645873 | orchestrator | 2026-03-11 00:54:26 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:54:29.676916 | orchestrator | 2026-03-11 00:54:29 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:54:29.679533 | orchestrator | 2026-03-11 00:54:29 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:54:29.680142 | orchestrator | 2026-03-11 00:54:29 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:54:29.680193 | orchestrator | 2026-03-11 00:54:29 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:54:32.713212 | orchestrator | 2026-03-11 00:54:32 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:54:32.714292 | orchestrator | 2026-03-11 00:54:32 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:54:32.717261 | orchestrator | 2026-03-11 00:54:32 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:54:32.717306 | orchestrator | 2026-03-11 00:54:32 | INFO  | Wait 
1 second(s) until the next check 2026-03-11 00:54:35.749395 | orchestrator | 2026-03-11 00:54:35 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:54:35.751258 | orchestrator | 2026-03-11 00:54:35 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:54:35.752612 | orchestrator | 2026-03-11 00:54:35 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:54:35.752775 | orchestrator | 2026-03-11 00:54:35 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:54:38.800412 | orchestrator | 2026-03-11 00:54:38 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:54:38.801017 | orchestrator | 2026-03-11 00:54:38 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:54:38.801581 | orchestrator | 2026-03-11 00:54:38 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:54:38.801605 | orchestrator | 2026-03-11 00:54:38 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:54:41.833152 | orchestrator | 2026-03-11 00:54:41 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:54:41.833346 | orchestrator | 2026-03-11 00:54:41 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:54:41.833955 | orchestrator | 2026-03-11 00:54:41 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:54:41.833999 | orchestrator | 2026-03-11 00:54:41 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:54:44.862122 | orchestrator | 2026-03-11 00:54:44 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:54:44.862262 | orchestrator | 2026-03-11 00:54:44 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:54:44.862894 | orchestrator | 2026-03-11 00:54:44 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state 
STARTED 2026-03-11 00:54:44.862909 | orchestrator | 2026-03-11 00:54:44 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:54:47.904303 | orchestrator | 2026-03-11 00:54:47 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:54:47.906974 | orchestrator | 2026-03-11 00:54:47 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:54:47.907016 | orchestrator | 2026-03-11 00:54:47 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:54:47.907020 | orchestrator | 2026-03-11 00:54:47 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:54:50.933420 | orchestrator | 2026-03-11 00:54:50 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:54:50.936792 | orchestrator | 2026-03-11 00:54:50 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:54:50.937355 | orchestrator | 2026-03-11 00:54:50 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:54:50.937389 | orchestrator | 2026-03-11 00:54:50 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:54:53.966282 | orchestrator | 2026-03-11 00:54:53 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:54:53.967568 | orchestrator | 2026-03-11 00:54:53 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:54:53.969730 | orchestrator | 2026-03-11 00:54:53 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:54:53.969836 | orchestrator | 2026-03-11 00:54:53 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:54:57.019173 | orchestrator | 2026-03-11 00:54:57 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:54:57.019641 | orchestrator | 2026-03-11 00:54:57 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:54:57.020962 | orchestrator | 
2026-03-11 00:54:57 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:54:57.020995 | orchestrator | 2026-03-11 00:54:57 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:55:00.065582 | orchestrator | 2026-03-11 00:55:00 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:55:00.065903 | orchestrator | 2026-03-11 00:55:00 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:55:00.067248 | orchestrator | 2026-03-11 00:55:00 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:55:00.067697 | orchestrator | 2026-03-11 00:55:00 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:55:03.120121 | orchestrator | 2026-03-11 00:55:03 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:55:03.124815 | orchestrator | 2026-03-11 00:55:03 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:55:03.127241 | orchestrator | 2026-03-11 00:55:03 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:55:03.128047 | orchestrator | 2026-03-11 00:55:03 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:55:06.183205 | orchestrator | 2026-03-11 00:55:06 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:55:06.186216 | orchestrator | 2026-03-11 00:55:06 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:55:06.188328 | orchestrator | 2026-03-11 00:55:06 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:55:06.188382 | orchestrator | 2026-03-11 00:55:06 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:55:09.236493 | orchestrator | 2026-03-11 00:55:09 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:55:09.238084 | orchestrator | 2026-03-11 00:55:09 | INFO  | Task 
53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:55:09.240705 | orchestrator | 2026-03-11 00:55:09 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:55:09.240966 | orchestrator | 2026-03-11 00:55:09 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:55:12.281866 | orchestrator | 2026-03-11 00:55:12 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:55:12.284249 | orchestrator | 2026-03-11 00:55:12 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:55:12.285641 | orchestrator | 2026-03-11 00:55:12 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:55:12.285689 | orchestrator | 2026-03-11 00:55:12 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:55:15.330356 | orchestrator | 2026-03-11 00:55:15 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:55:15.331019 | orchestrator | 2026-03-11 00:55:15 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:55:15.332120 | orchestrator | 2026-03-11 00:55:15 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:55:15.332803 | orchestrator | 2026-03-11 00:55:15 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:55:18.378999 | orchestrator | 2026-03-11 00:55:18 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:55:18.380277 | orchestrator | 2026-03-11 00:55:18 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:55:18.381779 | orchestrator | 2026-03-11 00:55:18 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:55:18.381822 | orchestrator | 2026-03-11 00:55:18 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:55:21.437361 | orchestrator | 2026-03-11 00:55:21 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state 
STARTED 2026-03-11 00:55:21.439436 | orchestrator | 2026-03-11 00:55:21 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:55:21.441352 | orchestrator | 2026-03-11 00:55:21 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:55:21.441409 | orchestrator | 2026-03-11 00:55:21 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:55:24.489380 | orchestrator | 2026-03-11 00:55:24 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:55:24.490517 | orchestrator | 2026-03-11 00:55:24 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:55:24.491477 | orchestrator | 2026-03-11 00:55:24 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:55:24.491502 | orchestrator | 2026-03-11 00:55:24 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:55:27.542270 | orchestrator | 2026-03-11 00:55:27 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:55:27.543345 | orchestrator | 2026-03-11 00:55:27 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:55:27.545553 | orchestrator | 2026-03-11 00:55:27 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:55:27.545609 | orchestrator | 2026-03-11 00:55:27 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:55:30.595624 | orchestrator | 2026-03-11 00:55:30 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:55:30.596223 | orchestrator | 2026-03-11 00:55:30 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:55:30.597044 | orchestrator | 2026-03-11 00:55:30 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:55:30.597092 | orchestrator | 2026-03-11 00:55:30 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:55:33.639178 | orchestrator | 
2026-03-11 00:55:33 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:55:33.639247 | orchestrator | 2026-03-11 00:55:33 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:55:33.640120 | orchestrator | 2026-03-11 00:55:33 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:55:33.640167 | orchestrator | 2026-03-11 00:55:33 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:55:36.703007 | orchestrator | 2026-03-11 00:55:36 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:55:36.705161 | orchestrator | 2026-03-11 00:55:36 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:55:36.707038 | orchestrator | 2026-03-11 00:55:36 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:55:36.707109 | orchestrator | 2026-03-11 00:55:36 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:55:39.749058 | orchestrator | 2026-03-11 00:55:39 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:55:39.750602 | orchestrator | 2026-03-11 00:55:39 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:55:39.752905 | orchestrator | 2026-03-11 00:55:39 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:55:39.752983 | orchestrator | 2026-03-11 00:55:39 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:55:42.846289 | orchestrator | 2026-03-11 00:55:42 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:55:42.849497 | orchestrator | 2026-03-11 00:55:42 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED 2026-03-11 00:55:42.851265 | orchestrator | 2026-03-11 00:55:42 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:55:42.851333 | orchestrator | 2026-03-11 00:55:42 | INFO  | 
Wait 1 second(s) until the next check
2026-03-11 00:55:45.907588 | orchestrator | 2026-03-11 00:55:45 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED
2026-03-11 00:55:45.908865 | orchestrator | 2026-03-11 00:55:45 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:55:45.910470 | orchestrator | 2026-03-11 00:55:45 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED
2026-03-11 00:55:45.910844 | orchestrator | 2026-03-11 00:55:45 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:55:48.958403 | orchestrator | 2026-03-11 00:55:48 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED
2026-03-11 00:55:48.960599 | orchestrator | 2026-03-11 00:55:48 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:55:48.962922 | orchestrator | 2026-03-11 00:55:48 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED
2026-03-11 00:55:48.962975 | orchestrator | 2026-03-11 00:55:48 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:55:52.012964 | orchestrator | 2026-03-11 00:55:52 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED
2026-03-11 00:55:52.014389 | orchestrator | 2026-03-11 00:55:52 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:55:52.016298 | orchestrator | 2026-03-11 00:55:52 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED
2026-03-11 00:55:52.016364 | orchestrator | 2026-03-11 00:55:52 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:55:55.060551 | orchestrator | 2026-03-11 00:55:55 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED
2026-03-11 00:55:55.061976 | orchestrator | 2026-03-11 00:55:55 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:55:55.063626 | orchestrator | 2026-03-11 00:55:55 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED
2026-03-11 00:55:55.063675 | orchestrator | 2026-03-11 00:55:55 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:55:58.115159 | orchestrator | 2026-03-11 00:55:58 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED
2026-03-11 00:55:58.116772 | orchestrator | 2026-03-11 00:55:58 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:55:58.117482 | orchestrator | 2026-03-11 00:55:58 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED
2026-03-11 00:55:58.117509 | orchestrator | 2026-03-11 00:55:58 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:56:01.165653 | orchestrator | 2026-03-11 00:56:01 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED
2026-03-11 00:56:01.168548 | orchestrator | 2026-03-11 00:56:01 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:56:01.172798 | orchestrator | 2026-03-11 00:56:01 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED
2026-03-11 00:56:01.173500 | orchestrator | 2026-03-11 00:56:01 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:56:04.220681 | orchestrator | 2026-03-11 00:56:04 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED
2026-03-11 00:56:04.221577 | orchestrator | 2026-03-11 00:56:04 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:56:04.223420 | orchestrator | 2026-03-11 00:56:04 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED
2026-03-11 00:56:04.223487 | orchestrator | 2026-03-11 00:56:04 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:56:07.273951 | orchestrator | 2026-03-11 00:56:07 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED
2026-03-11 00:56:07.276115 | orchestrator | 2026-03-11 00:56:07 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:56:07.278617 | orchestrator | 2026-03-11 00:56:07 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED
2026-03-11 00:56:07.278683 | orchestrator | 2026-03-11 00:56:07 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:56:10.324524 | orchestrator | 2026-03-11 00:56:10 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED
2026-03-11 00:56:10.326324 | orchestrator | 2026-03-11 00:56:10 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:56:10.328261 | orchestrator | 2026-03-11 00:56:10 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED
2026-03-11 00:56:10.328294 | orchestrator | 2026-03-11 00:56:10 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:56:13.373013 | orchestrator | 2026-03-11 00:56:13 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED
2026-03-11 00:56:13.374861 | orchestrator | 2026-03-11 00:56:13 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state STARTED
2026-03-11 00:56:13.377118 | orchestrator | 2026-03-11 00:56:13 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED
2026-03-11 00:56:13.377158 | orchestrator | 2026-03-11 00:56:13 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:56:16.413627 | orchestrator | 2026-03-11 00:56:16 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED
2026-03-11 00:56:16.420902 | orchestrator | 2026-03-11 00:56:16 | INFO  | Task 53228579-4d9e-48e0-8e1c-b1dc741d500b is in state SUCCESS
2026-03-11 00:56:16.422453 | orchestrator |
2026-03-11 00:56:16.422517 | orchestrator | [WARNING]: Collection community.general does not support Ansible version
2026-03-11 00:56:16.422524 | orchestrator | 2.16.14
2026-03-11 00:56:16.422529 | orchestrator |
2026-03-11 00:56:16.422534 | orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
2026-03-11 00:56:16.422539 | orchestrator |
2026-03-11 00:56:16.422543 | orchestrator | TASK [ceph-facts : Include facts.yml] ******************************************
2026-03-11 00:56:16.422548 | orchestrator | Wednesday 11 March 2026 00:45:38 +0000 (0:00:00.646) 0:00:00.646 *******
2026-03-11 00:56:16.422553 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:16.422558 | orchestrator |
2026-03-11 00:56:16.422648 | orchestrator | TASK [ceph-facts : Check if it is atomic host] *********************************
2026-03-11 00:56:16.422658 | orchestrator | Wednesday 11 March 2026 00:45:39 +0000 (0:00:01.006) 0:00:01.653 *******
2026-03-11 00:56:16.422664 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.422671 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.422676 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.422683 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:16.422688 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:16.422694 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.422700 | orchestrator |
2026-03-11 00:56:16.423093 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] *****************************************
2026-03-11 00:56:16.423110 | orchestrator | Wednesday 11 March 2026 00:45:41 +0000 (0:00:01.879) 0:00:03.532 *******
2026-03-11 00:56:16.423114 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.423118 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.423122 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.423126 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:16.423129 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:16.423133 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.423139 | orchestrator |
2026-03-11 00:56:16.423145 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2026-03-11 00:56:16.423151 | orchestrator | Wednesday 11 March 2026 00:45:42 +0000 (0:00:00.789) 0:00:04.322 *******
2026-03-11 00:56:16.423157 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.423162 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.423168 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.423173 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:16.423179 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:16.423184 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.423190 | orchestrator |
2026-03-11 00:56:16.423197 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2026-03-11 00:56:16.423203 | orchestrator | Wednesday 11 March 2026 00:45:43 +0000 (0:00:01.114) 0:00:05.436 *******
2026-03-11 00:56:16.423209 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.423215 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.423221 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.423227 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:16.423233 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:16.423239 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.423243 | orchestrator |
2026-03-11 00:56:16.423247 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2026-03-11 00:56:16.423251 | orchestrator | Wednesday 11 March 2026 00:45:44 +0000 (0:00:00.708) 0:00:06.145 *******
2026-03-11 00:56:16.423255 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.423259 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.423263 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.423267 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:16.423270 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:16.423274 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.423278 | orchestrator |
2026-03-11 00:56:16.423282 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2026-03-11 00:56:16.423297 | orchestrator | Wednesday 11 March 2026 00:45:45 +0000 (0:00:01.133) 0:00:07.278 *******
2026-03-11 00:56:16.423301 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.423305 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.423308 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.423313 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:16.423316 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:16.423320 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.423325 | orchestrator |
2026-03-11 00:56:16.423340 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2026-03-11 00:56:16.423347 | orchestrator | Wednesday 11 March 2026 00:45:46 +0000 (0:00:01.467) 0:00:08.745 *******
2026-03-11 00:56:16.423354 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.423361 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.423403 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.423408 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.423414 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.423419 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.423424 | orchestrator |
2026-03-11 00:56:16.423430 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2026-03-11 00:56:16.423472 | orchestrator | Wednesday 11 March 2026 00:45:47 +0000 (0:00:01.001) 0:00:09.747 *******
2026-03-11 00:56:16.423480 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.423485 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.423491 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.423507 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:16.423513 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:16.423518 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.423524 | orchestrator |
2026-03-11 00:56:16.423530 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2026-03-11 00:56:16.423536 | orchestrator | Wednesday 11 March 2026 00:45:49 +0000 (0:00:01.106) 0:00:10.853 *******
2026-03-11 00:56:16.423542 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-11 00:56:16.423549 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-11 00:56:16.423554 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-11 00:56:16.423560 | orchestrator |
2026-03-11 00:56:16.423565 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2026-03-11 00:56:16.423570 | orchestrator | Wednesday 11 March 2026 00:45:49 +0000 (0:00:00.670) 0:00:11.523 *******
2026-03-11 00:56:16.423576 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.423582 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.423588 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.423610 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:16.423617 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:16.423622 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.423628 | orchestrator |
2026-03-11 00:56:16.423635 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2026-03-11 00:56:16.423641 | orchestrator | Wednesday 11 March 2026 00:45:51 +0000 (0:00:02.683) 0:00:12.909 *******
2026-03-11 00:56:16.423647 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-11 00:56:16.423654 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-11 00:56:16.423660 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-11 00:56:16.423666 | orchestrator |
2026-03-11 00:56:16.423672 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket]
******************************** 2026-03-11 00:56:16.423678 | orchestrator | Wednesday 11 March 2026 00:45:53 +0000 (0:00:02.683) 0:00:15.593 ******* 2026-03-11 00:56:16.423684 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-11 00:56:16.423690 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-11 00:56:16.423695 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-11 00:56:16.423700 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.423704 | orchestrator | 2026-03-11 00:56:16.423730 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-11 00:56:16.423735 | orchestrator | Wednesday 11 March 2026 00:45:54 +0000 (0:00:00.930) 0:00:16.524 ******* 2026-03-11 00:56:16.423741 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.423749 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.423754 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.423758 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.423763 | orchestrator | 2026-03-11 00:56:16.423767 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-11 00:56:16.423771 | orchestrator | Wednesday 11 March 2026 00:45:55 +0000 (0:00:01.045) 0:00:17.570 ******* 2026-03-11 
00:56:16.424215 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.424245 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.424252 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.424258 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.424265 | orchestrator | 2026-03-11 00:56:16.424271 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-11 00:56:16.424275 | orchestrator | Wednesday 11 March 2026 00:45:56 +0000 (0:00:00.357) 0:00:17.928 ******* 2026-03-11 00:56:16.424302 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-11 00:45:51.725367', 'end': '2026-03-11 00:45:51.815804', 
'delta': '0:00:00.090437', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.424310 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-11 00:45:52.825510', 'end': '2026-03-11 00:45:52.918047', 'delta': '0:00:00.092537', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.424314 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2026-03-11 00:45:53.564178', 'end': '2026-03-11 00:45:53.651191', 'delta': '0:00:00.087013', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 
'ansible_loop_var': 'item'})  2026-03-11 00:56:16.424318 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.424322 | orchestrator | 2026-03-11 00:56:16.424326 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-11 00:56:16.424336 | orchestrator | Wednesday 11 March 2026 00:45:56 +0000 (0:00:00.222) 0:00:18.150 ******* 2026-03-11 00:56:16.424340 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.424343 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.424347 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.424351 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.424355 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.424358 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.424362 | orchestrator | 2026-03-11 00:56:16.424366 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-11 00:56:16.424370 | orchestrator | Wednesday 11 March 2026 00:45:58 +0000 (0:00:01.976) 0:00:20.129 ******* 2026-03-11 00:56:16.424373 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-11 00:56:16.424377 | orchestrator | 2026-03-11 00:56:16.424381 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-11 00:56:16.424385 | orchestrator | Wednesday 11 March 2026 00:45:59 +0000 (0:00:01.511) 0:00:21.640 ******* 2026-03-11 00:56:16.424393 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.424397 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.424400 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.424404 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.424408 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.424412 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.424417 | orchestrator | 2026-03-11 00:56:16.424423 | orchestrator | TASK [ceph-facts : Get current fsid] 
******************************************* 2026-03-11 00:56:16.424427 | orchestrator | Wednesday 11 March 2026 00:46:02 +0000 (0:00:02.285) 0:00:23.926 ******* 2026-03-11 00:56:16.424431 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.424434 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.424438 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.424442 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.424445 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.424449 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.424453 | orchestrator | 2026-03-11 00:56:16.424456 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-11 00:56:16.424460 | orchestrator | Wednesday 11 March 2026 00:46:03 +0000 (0:00:01.734) 0:00:25.660 ******* 2026-03-11 00:56:16.424464 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.424467 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.424471 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.424476 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.424481 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.424487 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.424493 | orchestrator | 2026-03-11 00:56:16.424498 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2026-03-11 00:56:16.424504 | orchestrator | Wednesday 11 March 2026 00:46:04 +0000 (0:00:01.113) 0:00:26.774 ******* 2026-03-11 00:56:16.424510 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.424515 | orchestrator | 2026-03-11 00:56:16.424521 | orchestrator | TASK [ceph-facts : Generate cluster fsid] ************************************** 2026-03-11 00:56:16.424527 | orchestrator | Wednesday 11 March 2026 00:46:05 +0000 (0:00:00.220) 0:00:26.994 ******* 2026-03-11 00:56:16.424533 | orchestrator | skipping: 
[testbed-node-3] 2026-03-11 00:56:16.424538 | orchestrator | 2026-03-11 00:56:16.424544 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2026-03-11 00:56:16.424550 | orchestrator | Wednesday 11 March 2026 00:46:05 +0000 (0:00:00.243) 0:00:27.238 ******* 2026-03-11 00:56:16.424556 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.424562 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.424568 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.424592 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.424599 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.424604 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.424613 | orchestrator | 2026-03-11 00:56:16.424628 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2026-03-11 00:56:16.424634 | orchestrator | Wednesday 11 March 2026 00:46:06 +0000 (0:00:00.758) 0:00:27.997 ******* 2026-03-11 00:56:16.424640 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.424645 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.424652 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.424658 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.424663 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.424670 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.424675 | orchestrator | 2026-03-11 00:56:16.424682 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2026-03-11 00:56:16.424688 | orchestrator | Wednesday 11 March 2026 00:46:07 +0000 (0:00:01.036) 0:00:29.033 ******* 2026-03-11 00:56:16.424694 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.424698 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.424702 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.424705 | orchestrator | skipping: 
[testbed-node-0] 2026-03-11 00:56:16.424732 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.424738 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.424744 | orchestrator | 2026-03-11 00:56:16.424750 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2026-03-11 00:56:16.424757 | orchestrator | Wednesday 11 March 2026 00:46:08 +0000 (0:00:00.997) 0:00:30.030 ******* 2026-03-11 00:56:16.424763 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.424769 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.424775 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.424781 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.424787 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.424792 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.424798 | orchestrator | 2026-03-11 00:56:16.424804 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2026-03-11 00:56:16.424811 | orchestrator | Wednesday 11 March 2026 00:46:09 +0000 (0:00:01.091) 0:00:31.122 ******* 2026-03-11 00:56:16.424815 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.424820 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.424824 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.424828 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.424832 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.424837 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.424841 | orchestrator | 2026-03-11 00:56:16.424845 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2026-03-11 00:56:16.424849 | orchestrator | Wednesday 11 March 2026 00:46:09 +0000 (0:00:00.676) 0:00:31.799 ******* 2026-03-11 00:56:16.424854 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.424858 | orchestrator | skipping: 
[testbed-node-4] 2026-03-11 00:56:16.424863 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.424867 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.424871 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.424875 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.425138 | orchestrator | 2026-03-11 00:56:16.425144 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2026-03-11 00:56:16.425148 | orchestrator | Wednesday 11 March 2026 00:46:10 +0000 (0:00:00.954) 0:00:32.753 ******* 2026-03-11 00:56:16.425152 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.425155 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.425159 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.425163 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.425173 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.425177 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.425180 | orchestrator | 2026-03-11 00:56:16.425184 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2026-03-11 00:56:16.425188 | orchestrator | Wednesday 11 March 2026 00:46:11 +0000 (0:00:00.675) 0:00:33.428 ******* 2026-03-11 00:56:16.425199 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--71564836--6f16--509c--9c2d--06150302b625-osd--block--71564836--6f16--509c--9c2d--06150302b625', 'dm-uuid-LVM-pyZ5rB0R0qmIWUxI5gCQVKaKF0hu4glj74GAuXfKv2MAaOoBo1mxVFBDd2JymnHg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425205 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--20faa7ec--42ec--56bc--96e8--0b7388032f08-osd--block--20faa7ec--42ec--56bc--96e8--0b7388032f08', 'dm-uuid-LVM-pXd1UaKkJmiNo8fAWwtODo0F9CzuBWMNam2cYCT1dcxyx2pRueNkuIYX2dwy7nwk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425258 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425265 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425269 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425273 
| orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425281 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425288 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425295 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425299 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2fb06152--6c58--5f9b--bb14--a51d715c3982-osd--block--2fb06152--6c58--5f9b--bb14--a51d715c3982', 'dm-uuid-LVM-7Uuvgqh6NcBREtc01Xdtz3qAOv3zfovluPSUPEC7NhlzmhxC0Nc6POtStmfO1Wdw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425337 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8', 'scsi-SQEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.425348 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--71564836--6f16--509c--9c2d--06150302b625-osd--block--71564836--6f16--509c--9c2d--06150302b625'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ivV1Pd-GQUU-0hyB-f198-psgw-Gkx3-f2lD49', 'scsi-0QEMU_QEMU_HARDDISK_093a0f58-cc4b-4485-9e6f-5c5128ebf642', 'scsi-SQEMU_QEMU_HARDDISK_093a0f58-cc4b-4485-9e6f-5c5128ebf642'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.425358 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2e0b0e2c--c482--530c--847f--054ffec93e8e-osd--block--2e0b0e2c--c482--530c--847f--054ffec93e8e', 'dm-uuid-LVM-AKpMPdveCGqZfTHNqUdOrwypZcJWcalbIZh1AdPadOXUp4IlZWBvWWFtgVHCFWIq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425362 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': 
{'holders': ['ceph--20faa7ec--42ec--56bc--96e8--0b7388032f08-osd--block--20faa7ec--42ec--56bc--96e8--0b7388032f08'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fAR1X5-7HZS-e9KQ-Z8pC-qVVR-MPmq-1ajZSi', 'scsi-0QEMU_QEMU_HARDDISK_ae1c2658-52b8-455d-907b-e7170e3050e5', 'scsi-SQEMU_QEMU_HARDDISK_ae1c2658-52b8-455d-907b-e7170e3050e5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.425399 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ff314bd-8772-4cae-a8e3-239e2ae43cb3', 'scsi-SQEMU_QEMU_HARDDISK_8ff314bd-8772-4cae-a8e3-239e2ae43cb3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.425404 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--c12a1925--beca--5a04--a9cd--b492500b7146-osd--block--c12a1925--beca--5a04--a9cd--b492500b7146', 'dm-uuid-LVM-CWgETdHvS4Dy2AyHaaYd2xmULpdrXOiJcr9BFGM4S4KpW0eOZxQoG98LLDMBbi6M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425419 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.425431 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425437 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--75b18a9f--434b--5575--8ed7--e1e8868eceb5-osd--block--75b18a9f--434b--5575--8ed7--e1e8868eceb5', 'dm-uuid-LVM-17OUSIdr3HuYahsLwJHPMesEwkWU3kj0L7NymUjJrvhQFMjl04ZdJ0mGQS50dlGZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425447 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425492 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425501 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425506 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425512 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425519 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425531 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425541 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425548 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425553 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425559 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425595 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425601 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.425608 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425620 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425666 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2', 'scsi-SQEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2-part1', 'scsi-SQEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2-part14', 'scsi-SQEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2-part15', 'scsi-SQEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 
'5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2-part16', 'scsi-SQEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.425678 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a', 'scsi-SQEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part15'], 'labels': 
['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.425690 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-02-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.425768 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': 
['ceph--2fb06152--6c58--5f9b--bb14--a51d715c3982-osd--block--2fb06152--6c58--5f9b--bb14--a51d715c3982'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lY4cgz-KPol-Cy9h-jYPc-tiHv-Zjms-O98Zn3', 'scsi-0QEMU_QEMU_HARDDISK_eb5be362-3b33-4846-8138-86194f5d1a8a', 'scsi-SQEMU_QEMU_HARDDISK_eb5be362-3b33-4846-8138-86194f5d1a8a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.425780 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2e0b0e2c--c482--530c--847f--054ffec93e8e-osd--block--2e0b0e2c--c482--530c--847f--054ffec93e8e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fMJKz6-77i5-37CY-TSkd-IvL9-nNqV-LEHCjI', 'scsi-0QEMU_QEMU_HARDDISK_f36f8e1d-14c5-427c-b242-d446b19c77db', 'scsi-SQEMU_QEMU_HARDDISK_f36f8e1d-14c5-427c-b242-d446b19c77db'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.425793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425803 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288642ce-5fa9-4bc7-a508-61d675ea6136', 'scsi-SQEMU_QEMU_HARDDISK_288642ce-5fa9-4bc7-a508-61d675ea6136'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.425811 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.425817 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425823 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.425829 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.425834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425857 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425864 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425886 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425896 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425908 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425914 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425936 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425942 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425949 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.425965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e', 'scsi-SQEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.426371 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6', 'scsi-SQEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part1', 'scsi-SQEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part14', 'scsi-SQEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part15', 'scsi-SQEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part16', 'scsi-SQEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 
167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.426408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-02-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.426417 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.426424 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.426440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.426447 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:56:16.426525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891', 'scsi-SQEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 
'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.426544 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-02-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.426551 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.426558 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c12a1925--beca--5a04--a9cd--b492500b7146-osd--block--c12a1925--beca--5a04--a9cd--b492500b7146'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tuJMcM-uQnl-JSTs-WrnO-sWxn-3scz-VXnlPQ', 'scsi-0QEMU_QEMU_HARDDISK_7fe845d7-e58c-4b3d-846a-c114ba83f0c4', 'scsi-SQEMU_QEMU_HARDDISK_7fe845d7-e58c-4b3d-846a-c114ba83f0c4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.426570 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--75b18a9f--434b--5575--8ed7--e1e8868eceb5-osd--block--75b18a9f--434b--5575--8ed7--e1e8868eceb5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qz6mOZ-2wp1-3a0W-Qzeb-M25K-Xnxh-aHxL2P', 'scsi-0QEMU_QEMU_HARDDISK_b058385a-4b50-41f2-be6b-aeff7a6e6499', 'scsi-SQEMU_QEMU_HARDDISK_b058385a-4b50-41f2-be6b-aeff7a6e6499'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.426578 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc665229-5891-49fd-b2c5-1ba6ac78c628', 'scsi-SQEMU_QEMU_HARDDISK_fc665229-5891-49fd-b2c5-1ba6ac78c628'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.426813 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:56:16.426829 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.426841 | orchestrator | 2026-03-11 00:56:16.426846 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2026-03-11 00:56:16.426851 | orchestrator | Wednesday 11 March 2026 00:46:13 +0000 (0:00:01.483) 0:00:34.912 ******* 2026-03-11 00:56:16.426856 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2fb06152--6c58--5f9b--bb14--a51d715c3982-osd--block--2fb06152--6c58--5f9b--bb14--a51d715c3982', 'dm-uuid-LVM-7Uuvgqh6NcBREtc01Xdtz3qAOv3zfovluPSUPEC7NhlzmhxC0Nc6POtStmfO1Wdw'], 'labels': [], 'masters': [], 'uuids': []}, 
'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.426862 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2e0b0e2c--c482--530c--847f--054ffec93e8e-osd--block--2e0b0e2c--c482--530c--847f--054ffec93e8e', 'dm-uuid-LVM-AKpMPdveCGqZfTHNqUdOrwypZcJWcalbIZh1AdPadOXUp4IlZWBvWWFtgVHCFWIq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.426872 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.426878 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.426882 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.426926 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.426938 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.426943 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.426947 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.426955 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 
'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.426993 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a', 'scsi-SQEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427004 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--2fb06152--6c58--5f9b--bb14--a51d715c3982-osd--block--2fb06152--6c58--5f9b--bb14--a51d715c3982'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lY4cgz-KPol-Cy9h-jYPc-tiHv-Zjms-O98Zn3', 'scsi-0QEMU_QEMU_HARDDISK_eb5be362-3b33-4846-8138-86194f5d1a8a', 'scsi-SQEMU_QEMU_HARDDISK_eb5be362-3b33-4846-8138-86194f5d1a8a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427013 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--2e0b0e2c--c482--530c--847f--054ffec93e8e-osd--block--2e0b0e2c--c482--530c--847f--054ffec93e8e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fMJKz6-77i5-37CY-TSkd-IvL9-nNqV-LEHCjI', 'scsi-0QEMU_QEMU_HARDDISK_f36f8e1d-14c5-427c-b242-d446b19c77db', 'scsi-SQEMU_QEMU_HARDDISK_f36f8e1d-14c5-427c-b242-d446b19c77db'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427018 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288642ce-5fa9-4bc7-a508-61d675ea6136', 'scsi-SQEMU_QEMU_HARDDISK_288642ce-5fa9-4bc7-a508-61d675ea6136'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427053 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427063 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c12a1925--beca--5a04--a9cd--b492500b7146-osd--block--c12a1925--beca--5a04--a9cd--b492500b7146', 'dm-uuid-LVM-CWgETdHvS4Dy2AyHaaYd2xmULpdrXOiJcr9BFGM4S4KpW0eOZxQoG98LLDMBbi6M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 
41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427068 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--75b18a9f--434b--5575--8ed7--e1e8868eceb5-osd--block--75b18a9f--434b--5575--8ed7--e1e8868eceb5', 'dm-uuid-LVM-17OUSIdr3HuYahsLwJHPMesEwkWU3kj0L7NymUjJrvhQFMjl04ZdJ0mGQS50dlGZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427087 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--71564836--6f16--509c--9c2d--06150302b625-osd--block--71564836--6f16--509c--9c2d--06150302b625', 'dm-uuid-LVM-pyZ5rB0R0qmIWUxI5gCQVKaKF0hu4glj74GAuXfKv2MAaOoBo1mxVFBDd2JymnHg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427092 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 
'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427097 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427136 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--20faa7ec--42ec--56bc--96e8--0b7388032f08-osd--block--20faa7ec--42ec--56bc--96e8--0b7388032f08', 'dm-uuid-LVM-pXd1UaKkJmiNo8fAWwtODo0F9CzuBWMNam2cYCT1dcxyx2pRueNkuIYX2dwy7nwk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427142 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427147 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427155 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427160 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427164 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427211 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427219 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | 
default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427226 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427232 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427242 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 
'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427353 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6', 'scsi-SQEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part1', 'scsi-SQEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part14', 'scsi-SQEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part15', 'scsi-SQEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 
'5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part16', 'scsi-SQEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427397 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--c12a1925--beca--5a04--a9cd--b492500b7146-osd--block--c12a1925--beca--5a04--a9cd--b492500b7146'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tuJMcM-uQnl-JSTs-WrnO-sWxn-3scz-VXnlPQ', 'scsi-0QEMU_QEMU_HARDDISK_7fe845d7-e58c-4b3d-846a-c114ba83f0c4', 'scsi-SQEMU_QEMU_HARDDISK_7fe845d7-e58c-4b3d-846a-c114ba83f0c4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427411 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--75b18a9f--434b--5575--8ed7--e1e8868eceb5-osd--block--75b18a9f--434b--5575--8ed7--e1e8868eceb5'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qz6mOZ-2wp1-3a0W-Qzeb-M25K-Xnxh-aHxL2P', 'scsi-0QEMU_QEMU_HARDDISK_b058385a-4b50-41f2-be6b-aeff7a6e6499', 'scsi-SQEMU_QEMU_HARDDISK_b058385a-4b50-41f2-be6b-aeff7a6e6499'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427418 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc665229-5891-49fd-b2c5-1ba6ac78c628', 'scsi-SQEMU_QEMU_HARDDISK_fc665229-5891-49fd-b2c5-1ba6ac78c628'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427541 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427551 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.427556 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 
'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427561 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427566 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427577 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427581 
| orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427591 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427750 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427763 | orchestrator | skipping: [testbed-node-0] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427768 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427779 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2', 'scsi-SQEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2-part1', 'scsi-SQEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2-part14', 'scsi-SQEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2-part15', 'scsi-SQEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2-part16', 'scsi-SQEMU_QEMU_HARDDISK_fbcd8f10-01e6-46d3-8161-dd0ec29d23f2-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-11 00:56:16.427832 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-02-52-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427838 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.427843 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427848 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427852 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.427860 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427864 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427873 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427904 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427909 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427913 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 
'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427917 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427924 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427959 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8', 'scsi-SQEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-11 00:56:16.427965 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427969 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427976 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.427984 | orchestrator | 
skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--71564836--6f16--509c--9c2d--06150302b625-osd--block--71564836--6f16--509c--9c2d--06150302b625'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ivV1Pd-GQUU-0hyB-f198-psgw-Gkx3-f2lD49', 'scsi-0QEMU_QEMU_HARDDISK_093a0f58-cc4b-4485-9e6f-5c5128ebf642', 'scsi-SQEMU_QEMU_HARDDISK_093a0f58-cc4b-4485-9e6f-5c5128ebf642'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.428019 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.428025 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--20faa7ec--42ec--56bc--96e8--0b7388032f08-osd--block--20faa7ec--42ec--56bc--96e8--0b7388032f08'], 'host': 'SCSI 
storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fAR1X5-7HZS-e9KQ-Z8pC-qVVR-MPmq-1ajZSi', 'scsi-0QEMU_QEMU_HARDDISK_ae1c2658-52b8-455d-907b-e7170e3050e5', 'scsi-SQEMU_QEMU_HARDDISK_ae1c2658-52b8-455d-907b-e7170e3050e5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.428029 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.428036 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.428043 | orchestrator | skipping: [testbed-node-2] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.428047 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ff314bd-8772-4cae-a8e3-239e2ae43cb3', 'scsi-SQEMU_QEMU_HARDDISK_8ff314bd-8772-4cae-a8e3-239e2ae43cb3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.428072 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891', 'scsi-SQEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891-part1', 'scsi-SQEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891-part14', 'scsi-SQEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891-part15', 'scsi-SQEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891-part16', 'scsi-SQEMU_QEMU_HARDDISK_0c7e2588-12fd-42af-aa14-3920652e8891-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-11 00:56:16.428109 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.428114 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.428138 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.428143 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-02-50-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.428147 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.428155 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e', 'scsi-SQEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e-part1', 'scsi-SQEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e-part14', 'scsi-SQEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e-part15', 'scsi-SQEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e-part16', 'scsi-SQEMU_QEMU_HARDDISK_dc47894e-e8a2-41fd-b2d5-937966a93d0e-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2026-03-11 00:56:16.428163 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.428167 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-02-54-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:56:16.428171 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.428175 | orchestrator | 2026-03-11 00:56:16.428199 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2026-03-11 00:56:16.428204 | orchestrator | Wednesday 11 March 2026 00:46:14 +0000 (0:00:01.612) 0:00:36.524 ******* 2026-03-11 00:56:16.428208 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.428212 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.428215 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.428219 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.428223 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.428227 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.428230 | orchestrator | 2026-03-11 00:56:16.428234 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2026-03-11 00:56:16.428238 | orchestrator | Wednesday 11 March 2026 00:46:16 +0000 (0:00:01.472) 0:00:37.997 ******* 2026-03-11 00:56:16.428242 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.428245 | 
orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.428249 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.428253 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.428257 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.428260 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.428264 | orchestrator | 2026-03-11 00:56:16.428268 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-11 00:56:16.428272 | orchestrator | Wednesday 11 March 2026 00:46:17 +0000 (0:00:00.857) 0:00:38.855 ******* 2026-03-11 00:56:16.428276 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.428279 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.428283 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.428287 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.428291 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.428294 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.428298 | orchestrator | 2026-03-11 00:56:16.428305 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-11 00:56:16.428309 | orchestrator | Wednesday 11 March 2026 00:46:18 +0000 (0:00:01.071) 0:00:39.927 ******* 2026-03-11 00:56:16.428313 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.428317 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.428320 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.428324 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.428328 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.428332 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.428335 | orchestrator | 2026-03-11 00:56:16.428339 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2026-03-11 00:56:16.428343 | orchestrator | Wednesday 11 March 2026 00:46:18 +0000 (0:00:00.739) 0:00:40.666 
******* 2026-03-11 00:56:16.428346 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.428350 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.428354 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.428358 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.428361 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.428365 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.428369 | orchestrator | 2026-03-11 00:56:16.428372 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2026-03-11 00:56:16.428376 | orchestrator | Wednesday 11 March 2026 00:46:21 +0000 (0:00:02.180) 0:00:42.847 ******* 2026-03-11 00:56:16.428380 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.428384 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.428387 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.428391 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.428395 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.428399 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.428402 | orchestrator | 2026-03-11 00:56:16.428406 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2026-03-11 00:56:16.428410 | orchestrator | Wednesday 11 March 2026 00:46:22 +0000 (0:00:01.070) 0:00:43.918 ******* 2026-03-11 00:56:16.428417 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2026-03-11 00:56:16.428421 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2026-03-11 00:56:16.428425 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2026-03-11 00:56:16.428428 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2026-03-11 00:56:16.428432 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2026-03-11 00:56:16.428436 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2026-03-11 00:56:16.428440 
| orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-11 00:56:16.428443 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2026-03-11 00:56:16.428447 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2026-03-11 00:56:16.428451 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2026-03-11 00:56:16.428454 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2026-03-11 00:56:16.428458 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2026-03-11 00:56:16.428462 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2026-03-11 00:56:16.428465 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-11 00:56:16.428469 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2026-03-11 00:56:16.428473 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2026-03-11 00:56:16.428477 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-11 00:56:16.428480 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2026-03-11 00:56:16.428484 | orchestrator | 2026-03-11 00:56:16.428488 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2026-03-11 00:56:16.428492 | orchestrator | Wednesday 11 March 2026 00:46:25 +0000 (0:00:03.282) 0:00:47.200 ******* 2026-03-11 00:56:16.428496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-11 00:56:16.428499 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-11 00:56:16.428508 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-11 00:56:16.428512 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.428515 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2026-03-11 00:56:16.428519 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2026-03-11 00:56:16.428523 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  
2026-03-11 00:56:16.428527 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.428531 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2026-03-11 00:56:16.428547 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2026-03-11 00:56:16.428552 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2026-03-11 00:56:16.428556 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.428559 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-11 00:56:16.428563 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-11 00:56:16.428567 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-11 00:56:16.428572 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.428577 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2026-03-11 00:56:16.428583 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2026-03-11 00:56:16.428589 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2026-03-11 00:56:16.428594 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2026-03-11 00:56:16.428604 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.428610 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2026-03-11 00:56:16.428617 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2026-03-11 00:56:16.428622 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.428628 | orchestrator | 2026-03-11 00:56:16.428634 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2026-03-11 00:56:16.428639 | orchestrator | Wednesday 11 March 2026 00:46:26 +0000 (0:00:00.725) 0:00:47.926 ******* 2026-03-11 00:56:16.428645 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.428650 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.428655 | orchestrator | 
skipping: [testbed-node-2] 2026-03-11 00:56:16.428661 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:56:16.428667 | orchestrator | 2026-03-11 00:56:16.428673 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-11 00:56:16.428680 | orchestrator | Wednesday 11 March 2026 00:46:27 +0000 (0:00:01.542) 0:00:49.468 ******* 2026-03-11 00:56:16.428685 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.428691 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.428697 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.428703 | orchestrator | 2026-03-11 00:56:16.428747 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-11 00:56:16.428756 | orchestrator | Wednesday 11 March 2026 00:46:28 +0000 (0:00:00.426) 0:00:49.895 ******* 2026-03-11 00:56:16.428762 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.428768 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.428774 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.428780 | orchestrator | 2026-03-11 00:56:16.428785 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-11 00:56:16.428792 | orchestrator | Wednesday 11 March 2026 00:46:28 +0000 (0:00:00.445) 0:00:50.340 ******* 2026-03-11 00:56:16.428797 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.428803 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.428809 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.428815 | orchestrator | 2026-03-11 00:56:16.428822 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-11 00:56:16.428835 | orchestrator | Wednesday 11 March 2026 00:46:29 +0000 
(0:00:00.641) 0:00:50.981 ******* 2026-03-11 00:56:16.428841 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.428848 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.428859 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.428866 | orchestrator | 2026-03-11 00:56:16.428872 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-11 00:56:16.428878 | orchestrator | Wednesday 11 March 2026 00:46:30 +0000 (0:00:00.996) 0:00:51.978 ******* 2026-03-11 00:56:16.428884 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-11 00:56:16.428890 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-11 00:56:16.428896 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-11 00:56:16.428902 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.428908 | orchestrator | 2026-03-11 00:56:16.428914 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-11 00:56:16.428920 | orchestrator | Wednesday 11 March 2026 00:46:30 +0000 (0:00:00.681) 0:00:52.660 ******* 2026-03-11 00:56:16.428926 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-11 00:56:16.428933 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-11 00:56:16.428939 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-11 00:56:16.428945 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.428951 | orchestrator | 2026-03-11 00:56:16.428957 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-11 00:56:16.428964 | orchestrator | Wednesday 11 March 2026 00:46:31 +0000 (0:00:00.346) 0:00:53.006 ******* 2026-03-11 00:56:16.428970 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-11 00:56:16.428975 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-03-11 00:56:16.428982 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-11 00:56:16.428988 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.428994 | orchestrator | 2026-03-11 00:56:16.429001 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-11 00:56:16.429007 | orchestrator | Wednesday 11 March 2026 00:46:31 +0000 (0:00:00.544) 0:00:53.551 ******* 2026-03-11 00:56:16.429013 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.429019 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.429024 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.429031 | orchestrator | 2026-03-11 00:56:16.429038 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-11 00:56:16.429043 | orchestrator | Wednesday 11 March 2026 00:46:32 +0000 (0:00:00.329) 0:00:53.880 ******* 2026-03-11 00:56:16.429049 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-11 00:56:16.429055 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-11 00:56:16.429088 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-11 00:56:16.429094 | orchestrator | 2026-03-11 00:56:16.429100 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2026-03-11 00:56:16.429106 | orchestrator | Wednesday 11 March 2026 00:46:33 +0000 (0:00:01.279) 0:00:55.160 ******* 2026-03-11 00:56:16.429112 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-11 00:56:16.429118 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-11 00:56:16.429124 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-11 00:56:16.429130 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-11 00:56:16.429136 | orchestrator | ok: [testbed-node-3 
-> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-11 00:56:16.429142 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-11 00:56:16.429148 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-11 00:56:16.429154 | orchestrator | 2026-03-11 00:56:16.429160 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2026-03-11 00:56:16.429171 | orchestrator | Wednesday 11 March 2026 00:46:34 +0000 (0:00:00.894) 0:00:56.054 ******* 2026-03-11 00:56:16.429177 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-11 00:56:16.429183 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-11 00:56:16.429189 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-11 00:56:16.429195 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2026-03-11 00:56:16.429200 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2026-03-11 00:56:16.429206 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2026-03-11 00:56:16.429212 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2026-03-11 00:56:16.429218 | orchestrator | 2026-03-11 00:56:16.429224 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-11 00:56:16.429230 | orchestrator | Wednesday 11 March 2026 00:46:35 +0000 (0:00:01.749) 0:00:57.803 ******* 2026-03-11 00:56:16.429237 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:16.429244 | orchestrator | 2026-03-11 00:56:16.429250 | 
orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-11 00:56:16.429255 | orchestrator | Wednesday 11 March 2026 00:46:37 +0000 (0:00:01.027) 0:00:58.831 ******* 2026-03-11 00:56:16.429261 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:16.429267 | orchestrator | 2026-03-11 00:56:16.429272 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-11 00:56:16.429282 | orchestrator | Wednesday 11 March 2026 00:46:37 +0000 (0:00:00.964) 0:00:59.796 ******* 2026-03-11 00:56:16.429287 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.429293 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.429299 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.429305 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.429311 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.429316 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.429322 | orchestrator | 2026-03-11 00:56:16.429328 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-11 00:56:16.429334 | orchestrator | Wednesday 11 March 2026 00:46:39 +0000 (0:00:01.172) 0:01:00.968 ******* 2026-03-11 00:56:16.429339 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.429345 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.429351 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.429357 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.429363 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.429368 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.429374 | orchestrator | 2026-03-11 00:56:16.429380 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-11 
00:56:16.429385 | orchestrator | Wednesday 11 March 2026 00:46:39 +0000 (0:00:00.825) 0:01:01.793 ******* 2026-03-11 00:56:16.429391 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.429397 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.429403 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.429408 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.429414 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.429420 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.429426 | orchestrator | 2026-03-11 00:56:16.429431 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-11 00:56:16.429437 | orchestrator | Wednesday 11 March 2026 00:46:40 +0000 (0:00:00.689) 0:01:02.483 ******* 2026-03-11 00:56:16.429443 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.429454 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.429460 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.429466 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.429472 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.429477 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.429483 | orchestrator | 2026-03-11 00:56:16.429488 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-11 00:56:16.429526 | orchestrator | Wednesday 11 March 2026 00:46:41 +0000 (0:00:00.763) 0:01:03.247 ******* 2026-03-11 00:56:16.429533 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.429539 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.429546 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.429553 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.429559 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.429585 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.429591 | orchestrator | 2026-03-11 00:56:16.429594 | orchestrator | TASK 
[ceph-handler : Check for a rbd mirror container] ************************* 2026-03-11 00:56:16.429598 | orchestrator | Wednesday 11 March 2026 00:46:42 +0000 (0:00:01.241) 0:01:04.488 ******* 2026-03-11 00:56:16.429602 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.429606 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.429609 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.429613 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.429617 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.429621 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.429624 | orchestrator | 2026-03-11 00:56:16.429628 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-11 00:56:16.429632 | orchestrator | Wednesday 11 March 2026 00:46:43 +0000 (0:00:00.749) 0:01:05.238 ******* 2026-03-11 00:56:16.429635 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.429639 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.429643 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.429647 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.429650 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.429654 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.429658 | orchestrator | 2026-03-11 00:56:16.429662 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-11 00:56:16.429665 | orchestrator | Wednesday 11 March 2026 00:46:44 +0000 (0:00:00.798) 0:01:06.037 ******* 2026-03-11 00:56:16.429669 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.429673 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.429676 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.429680 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.429684 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.429687 | orchestrator | ok: 
[testbed-node-5] 2026-03-11 00:56:16.429691 | orchestrator | 2026-03-11 00:56:16.429695 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-11 00:56:16.429699 | orchestrator | Wednesday 11 March 2026 00:46:46 +0000 (0:00:01.877) 0:01:07.915 ******* 2026-03-11 00:56:16.429702 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.429706 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.429725 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.429730 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.429733 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.429737 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.429740 | orchestrator | 2026-03-11 00:56:16.429744 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-11 00:56:16.429748 | orchestrator | Wednesday 11 March 2026 00:46:47 +0000 (0:00:01.372) 0:01:09.287 ******* 2026-03-11 00:56:16.429752 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.429755 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.429759 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.429762 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.429766 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.429775 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.429779 | orchestrator | 2026-03-11 00:56:16.429783 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-11 00:56:16.429787 | orchestrator | Wednesday 11 March 2026 00:46:48 +0000 (0:00:00.572) 0:01:09.859 ******* 2026-03-11 00:56:16.429790 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.429794 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.429799 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.429805 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.429812 | 
orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:16.429822 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.429828 | orchestrator |
2026-03-11 00:56:16.429834 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-11 00:56:16.429844 | orchestrator | Wednesday 11 March 2026 00:46:49 +0000 (0:00:01.050) 0:01:10.910 *******
2026-03-11 00:56:16.429851 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.429857 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.429863 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.429869 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.429875 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.429882 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.429885 | orchestrator |
2026-03-11 00:56:16.429889 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-11 00:56:16.429893 | orchestrator | Wednesday 11 March 2026 00:46:49 +0000 (0:00:00.824) 0:01:11.734 *******
2026-03-11 00:56:16.429897 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.429900 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.429904 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.429908 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.429911 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.429915 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.429919 | orchestrator |
2026-03-11 00:56:16.429922 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-11 00:56:16.429926 | orchestrator | Wednesday 11 March 2026 00:46:50 +0000 (0:00:00.885) 0:01:12.619 *******
2026-03-11 00:56:16.429930 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.429934 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.429937 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.429941 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.429945 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.429948 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.429952 | orchestrator |
2026-03-11 00:56:16.429956 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-11 00:56:16.429959 | orchestrator | Wednesday 11 March 2026 00:46:51 +0000 (0:00:00.764) 0:01:13.384 *******
2026-03-11 00:56:16.429963 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.429967 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.429970 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.429975 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.429980 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.429986 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.429992 | orchestrator |
2026-03-11 00:56:16.429997 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-11 00:56:16.430003 | orchestrator | Wednesday 11 March 2026 00:46:52 +0000 (0:00:01.116) 0:01:14.500 *******
2026-03-11 00:56:16.430009 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.430059 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.430065 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.430071 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.430108 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.430115 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.430121 | orchestrator |
2026-03-11 00:56:16.430127 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-11 00:56:16.430134 | orchestrator | Wednesday 11 March 2026 00:46:53 +0000 (0:00:00.544) 0:01:15.045 *******
2026-03-11 00:56:16.430144 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.430148 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.430152 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.430156 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:16.430160 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:16.430163 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.430167 | orchestrator |
2026-03-11 00:56:16.430171 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-11 00:56:16.430175 | orchestrator | Wednesday 11 March 2026 00:46:53 +0000 (0:00:00.700) 0:01:15.745 *******
2026-03-11 00:56:16.430181 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.430187 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.430193 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.430198 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:16.430204 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:16.430210 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.430217 | orchestrator |
2026-03-11 00:56:16.430223 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-11 00:56:16.430230 | orchestrator | Wednesday 11 March 2026 00:46:54 +0000 (0:00:00.530) 0:01:16.276 *******
2026-03-11 00:56:16.430236 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.430241 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.430248 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.430253 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:16.430257 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:16.430260 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.430264 | orchestrator |
2026-03-11 00:56:16.430268 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2026-03-11 00:56:16.430272 | orchestrator | Wednesday 11 March 2026 00:46:55 +0000 (0:00:01.288) 0:01:17.565 *******
2026-03-11 00:56:16.430275 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:16.430279 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:16.430283 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:16.430287 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:16.430290 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:16.430294 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:16.430298 | orchestrator |
2026-03-11 00:56:16.430305 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2026-03-11 00:56:16.430311 | orchestrator | Wednesday 11 March 2026 00:46:57 +0000 (0:00:01.870) 0:01:19.435 *******
2026-03-11 00:56:16.430317 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:16.430323 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:16.430328 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:16.430334 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:16.430340 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:16.430346 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:16.430353 | orchestrator |
2026-03-11 00:56:16.430359 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2026-03-11 00:56:16.430365 | orchestrator | Wednesday 11 March 2026 00:47:00 +0000 (0:00:02.804) 0:01:22.239 *******
2026-03-11 00:56:16.430372 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:16.430379 | orchestrator |
2026-03-11 00:56:16.430386 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2026-03-11 00:56:16.430400 | orchestrator | Wednesday 11 March 2026 00:47:01 +0000 (0:00:01.163) 0:01:23.403 *******
2026-03-11 00:56:16.430404 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.430407 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.430411 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.430415 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.430418 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.430422 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.430431 | orchestrator |
2026-03-11 00:56:16.430434 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2026-03-11 00:56:16.430438 | orchestrator | Wednesday 11 March 2026 00:47:02 +0000 (0:00:00.663) 0:01:24.067 *******
2026-03-11 00:56:16.430442 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.430445 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.430449 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.430453 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.430456 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.430460 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.430464 | orchestrator |
2026-03-11 00:56:16.430467 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2026-03-11 00:56:16.430471 | orchestrator | Wednesday 11 March 2026 00:47:02 +0000 (0:00:00.659) 0:01:24.726 *******
2026-03-11 00:56:16.430475 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-11 00:56:16.430479 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-11 00:56:16.430482 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-11 00:56:16.430486 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-11 00:56:16.430490 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-11 00:56:16.430494 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2026-03-11 00:56:16.430497 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-11 00:56:16.430501 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-11 00:56:16.430505 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-11 00:56:16.430509 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-11 00:56:16.430534 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-11 00:56:16.430538 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2026-03-11 00:56:16.430542 | orchestrator |
2026-03-11 00:56:16.430545 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2026-03-11 00:56:16.430549 | orchestrator | Wednesday 11 March 2026 00:47:04 +0000 (0:00:01.236) 0:01:25.963 *******
2026-03-11 00:56:16.430553 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:16.430557 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:16.430560 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:16.430564 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:16.430568 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:16.430571 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:16.430575 | orchestrator |
2026-03-11 00:56:16.430579 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2026-03-11 00:56:16.430582 | orchestrator | Wednesday 11 March 2026 00:47:05 +0000 (0:00:01.069) 0:01:27.032 *******
2026-03-11 00:56:16.430586 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.430590 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.430594 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.430597 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.430601 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.430605 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.430608 | orchestrator |
2026-03-11 00:56:16.430612 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2026-03-11 00:56:16.430616 | orchestrator | Wednesday 11 March 2026 00:47:05 +0000 (0:00:00.648) 0:01:27.681 *******
2026-03-11 00:56:16.430620 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.430623 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.430627 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.430631 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.430638 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.430642 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.430648 | orchestrator |
2026-03-11 00:56:16.430654 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2026-03-11 00:56:16.430659 | orchestrator | Wednesday 11 March 2026 00:47:06 +0000 (0:00:00.657) 0:01:28.338 *******
2026-03-11 00:56:16.430665 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.430670 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.430676 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.430683 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.430688 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.430695 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.430701 | orchestrator |
2026-03-11 00:56:16.430706 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2026-03-11 00:56:16.430752 | orchestrator | Wednesday 11 March 2026 00:47:07 +0000 (0:00:00.478) 0:01:28.817 *******
2026-03-11 00:56:16.430759 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:16.430766 | orchestrator |
2026-03-11 00:56:16.430772 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2026-03-11 00:56:16.430778 | orchestrator | Wednesday 11 March 2026 00:47:08 +0000 (0:00:00.999) 0:01:29.816 *******
2026-03-11 00:56:16.430784 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.430790 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.430795 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.430801 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:16.430812 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.430818 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:16.430824 | orchestrator |
2026-03-11 00:56:16.430829 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2026-03-11 00:56:16.430836 | orchestrator | Wednesday 11 March 2026 00:48:08 +0000 (0:01:00.418) 0:02:30.235 *******
2026-03-11 00:56:16.430842 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-11 00:56:16.430847 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-11 00:56:16.430853 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-11 00:56:16.430859 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.430865 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-11 00:56:16.430871 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-11 00:56:16.430877 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-11 00:56:16.430883 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.430890 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-11 00:56:16.430896 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-11 00:56:16.430899 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-11 00:56:16.430903 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.430907 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-11 00:56:16.430911 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-11 00:56:16.430914 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-11 00:56:16.430918 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.430922 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-11 00:56:16.430925 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-11 00:56:16.430929 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-11 00:56:16.430938 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.430973 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2026-03-11 00:56:16.430979 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2026-03-11 00:56:16.430985 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2026-03-11 00:56:16.430990 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.430996 | orchestrator |
2026-03-11 00:56:16.431003 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2026-03-11 00:56:16.431009 | orchestrator | Wednesday 11 March 2026 00:48:09 +0000 (0:00:00.634) 0:02:30.869 *******
2026-03-11 00:56:16.431015 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.431021 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.431027 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.431033 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.431040 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.431046 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.431052 | orchestrator |
2026-03-11 00:56:16.431059 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2026-03-11 00:56:16.431064 | orchestrator | Wednesday 11 March 2026 00:48:09 +0000 (0:00:00.779) 0:02:31.649 *******
2026-03-11 00:56:16.431067 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.431071 | orchestrator |
2026-03-11 00:56:16.431075 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2026-03-11 00:56:16.431078 | orchestrator | Wednesday 11 March 2026 00:48:10 +0000 (0:00:00.160) 0:02:31.810 *******
2026-03-11 00:56:16.431082 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.431086 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.431089 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.431093 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.431097 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.431100 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.431104 | orchestrator |
2026-03-11 00:56:16.431108 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2026-03-11 00:56:16.431111 | orchestrator | Wednesday 11 March 2026 00:48:10 +0000 (0:00:00.636) 0:02:32.447 *******
2026-03-11 00:56:16.431115 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.431118 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.431122 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.431126 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.431130 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.431133 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.431137 | orchestrator |
2026-03-11 00:56:16.431141 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2026-03-11 00:56:16.431144 | orchestrator | Wednesday 11 March 2026 00:48:11 +0000 (0:00:01.042) 0:02:33.490 *******
2026-03-11 00:56:16.431148 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.431152 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.431155 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.431159 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.431163 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.431166 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.431170 | orchestrator |
2026-03-11 00:56:16.431174 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2026-03-11 00:56:16.431177 | orchestrator | Wednesday 11 March 2026 00:48:12 +0000 (0:00:00.723) 0:02:34.213 *******
2026-03-11 00:56:16.431181 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.431185 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.431188 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.431192 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.431196 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:16.431199 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:16.431203 | orchestrator |
2026-03-11 00:56:16.431211 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2026-03-11 00:56:16.431220 | orchestrator | Wednesday 11 March 2026 00:48:15 +0000 (0:00:02.781) 0:02:36.994 *******
2026-03-11 00:56:16.431223 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.431227 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.431231 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.431234 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:16.431238 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:16.431242 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.431246 | orchestrator |
2026-03-11 00:56:16.431249 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2026-03-11 00:56:16.431253 | orchestrator | Wednesday 11 March 2026 00:48:15 +0000 (0:00:00.683) 0:02:37.678 *******
2026-03-11 00:56:16.431258 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:16.431263 | orchestrator |
2026-03-11 00:56:16.431266 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2026-03-11 00:56:16.431270 | orchestrator | Wednesday 11 March 2026 00:48:16 +0000 (0:00:01.029) 0:02:38.708 *******
2026-03-11 00:56:16.431274 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.431277 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.431281 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.431285 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.431289 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.431292 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.431296 | orchestrator |
2026-03-11 00:56:16.431300 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2026-03-11 00:56:16.431303 | orchestrator | Wednesday 11 March 2026 00:48:17 +0000 (0:00:00.829) 0:02:39.537 *******
2026-03-11 00:56:16.431307 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.431311 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.431314 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.431318 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.431322 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.431325 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.431329 | orchestrator |
2026-03-11 00:56:16.431333 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ******************
2026-03-11 00:56:16.431336 | orchestrator | Wednesday 11 March 2026 00:48:18 +0000 (0:00:00.657) 0:02:40.195 *******
2026-03-11 00:56:16.431340 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.431344 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.431367 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.431371 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.431375 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.431379 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.431382 | orchestrator |
2026-03-11 00:56:16.431386 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] *********************
2026-03-11 00:56:16.431390 | orchestrator | Wednesday 11 March 2026 00:48:19 +0000 (0:00:00.790) 0:02:40.985 *******
2026-03-11 00:56:16.431393 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.431397 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.431401 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.431404 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.431408 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.431412 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.431418 | orchestrator |
2026-03-11 00:56:16.431424 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ******************
2026-03-11 00:56:16.431429 | orchestrator | Wednesday 11 March 2026 00:48:19 +0000 (0:00:00.636) 0:02:41.622 *******
2026-03-11 00:56:16.431439 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.431448 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.431453 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.431460 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.431471 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.431477 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.431483 | orchestrator |
2026-03-11 00:56:16.431490 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] *******************
2026-03-11 00:56:16.431495 | orchestrator | Wednesday 11 March 2026 00:48:20 +0000 (0:00:00.702) 0:02:42.324 *******
2026-03-11 00:56:16.431501 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.431507 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.431513 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.431518 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.431523 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.431529 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.431534 | orchestrator |
2026-03-11 00:56:16.431540 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] *******************
2026-03-11 00:56:16.431546 | orchestrator | Wednesday 11 March 2026 00:48:21 +0000 (0:00:00.705) 0:02:43.030 *******
2026-03-11 00:56:16.431552 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.431557 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.431562 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.431568 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.431573 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.431578 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.431584 | orchestrator |
2026-03-11 00:56:16.431589 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ********************
2026-03-11 00:56:16.431595 | orchestrator | Wednesday 11 March 2026 00:48:22 +0000 (0:00:00.864) 0:02:43.894 *******
2026-03-11 00:56:16.431601 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.431606 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.431612 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.431617 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.431623 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:56:16.431629 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:56:16.431634 | orchestrator |
2026-03-11 00:56:16.431640 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] **********************
2026-03-11 00:56:16.431670 | orchestrator | Wednesday 11 March 2026 00:48:22 +0000 (0:00:00.637) 0:02:44.532 *******
2026-03-11 00:56:16.431676 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.431682 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.431688 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.431693 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:16.431705 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:16.431735 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.431741 | orchestrator |
2026-03-11 00:56:16.431747 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] **********************
2026-03-11 00:56:16.431752 | orchestrator | Wednesday 11 March 2026 00:48:23 +0000 (0:00:01.237) 0:02:45.770 *******
2026-03-11 00:56:16.431759 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:16.431767 | orchestrator |
2026-03-11 00:56:16.431773 | orchestrator | TASK [ceph-config : Create ceph initial directories] ***************************
2026-03-11 00:56:16.431779 | orchestrator | Wednesday 11 March 2026 00:48:25 +0000 (0:00:01.145) 0:02:46.916 *******
2026-03-11 00:56:16.431785 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
2026-03-11 00:56:16.431792 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
2026-03-11 00:56:16.431798 | orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
2026-03-11 00:56:16.431803 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
2026-03-11 00:56:16.431809 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
2026-03-11 00:56:16.431815 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
2026-03-11 00:56:16.431821 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
2026-03-11 00:56:16.431827 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
2026-03-11 00:56:16.431837 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
2026-03-11 00:56:16.431841 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
2026-03-11 00:56:16.431844 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
2026-03-11 00:56:16.431848 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
2026-03-11 00:56:16.431852 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
2026-03-11 00:56:16.431856 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
2026-03-11 00:56:16.431859 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
2026-03-11 00:56:16.431863 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
2026-03-11 00:56:16.431867 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
2026-03-11 00:56:16.431871 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
2026-03-11 00:56:16.431905 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
2026-03-11 00:56:16.431909 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
2026-03-11 00:56:16.431913 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
2026-03-11 00:56:16.431917 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
2026-03-11 00:56:16.431920 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
2026-03-11 00:56:16.431924 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
2026-03-11 00:56:16.431928 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
2026-03-11 00:56:16.431931 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
2026-03-11 00:56:16.431935 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
2026-03-11 00:56:16.431939 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
2026-03-11 00:56:16.431942 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
2026-03-11 00:56:16.431946 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
2026-03-11 00:56:16.431950 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
2026-03-11 00:56:16.431953 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
2026-03-11 00:56:16.431957 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
2026-03-11 00:56:16.431960 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash)
2026-03-11 00:56:16.431964 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash)
2026-03-11 00:56:16.431968 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash)
2026-03-11 00:56:16.431971 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
2026-03-11 00:56:16.431975 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
2026-03-11 00:56:16.431978 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
2026-03-11 00:56:16.431982 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
2026-03-11 00:56:16.431986 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
2026-03-11 00:56:16.431990 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
2026-03-11 00:56:16.431993 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash)
2026-03-11 00:56:16.431997 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash)
2026-03-11 00:56:16.432001 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-11 00:56:16.432005 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash)
2026-03-11 00:56:16.432009 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-11 00:56:16.432012 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
2026-03-11 00:56:16.432016 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-11 00:56:16.432020 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-11 00:56:16.432023 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
2026-03-11 00:56:16.432031 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-11 00:56:16.432035 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-11 00:56:16.432039 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-11 00:56:16.432046 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
2026-03-11 00:56:16.432050 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-11 00:56:16.432053 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-11 00:56:16.432057 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-11 00:56:16.432061 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-11 00:56:16.432064 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-11 00:56:16.432068 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
2026-03-11 00:56:16.432072 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-11 00:56:16.432076 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-11 00:56:16.432079 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-11 00:56:16.432083 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-11 00:56:16.432087 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-11 00:56:16.432090 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
2026-03-11 00:56:16.432094 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-11 00:56:16.432098 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-11 00:56:16.432101 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-11 00:56:16.432105 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-11 00:56:16.432109 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-11 00:56:16.432112 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-11 00:56:16.432116 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
2026-03-11 00:56:16.432120 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-11 00:56:16.432123 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-11 00:56:16.432140 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-11 00:56:16.432144 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-11 00:56:16.432147 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
2026-03-11 00:56:16.432151 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-11 00:56:16.432155 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
2026-03-11 00:56:16.432158 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
2026-03-11 00:56:16.432162 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-11 00:56:16.432166 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
2026-03-11 00:56:16.432169 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
2026-03-11 00:56:16.432173 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
2026-03-11 00:56:16.432177 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
2026-03-11 00:56:16.432180 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
2026-03-11 00:56:16.432184 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-11 00:56:16.432188 | orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
2026-03-11 00:56:16.432191 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
2026-03-11 00:56:16.432201 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
2026-03-11 00:56:16.432205 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
2026-03-11 00:56:16.432208 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
2026-03-11 00:56:16.432212 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
2026-03-11 00:56:16.432216 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
2026-03-11 00:56:16.432219 | orchestrator |
2026-03-11 00:56:16.432223 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************
2026-03-11 00:56:16.432227 | orchestrator | Wednesday 11 March 2026 00:48:31 +0000 (0:00:06.467) 0:02:53.383 *******
2026-03-11 00:56:16.432230 | orchestrator | skipping: [testbed-node-0] 2026-03-11
00:56:16.432234 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.432238 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.432243 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:56:16.432247 | orchestrator | 2026-03-11 00:56:16.432251 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2026-03-11 00:56:16.432254 | orchestrator | Wednesday 11 March 2026 00:48:32 +0000 (0:00:00.893) 0:02:54.277 ******* 2026-03-11 00:56:16.432258 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-11 00:56:16.432262 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-11 00:56:16.432266 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-11 00:56:16.432269 | orchestrator | 2026-03-11 00:56:16.432273 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2026-03-11 00:56:16.432280 | orchestrator | Wednesday 11 March 2026 00:48:33 +0000 (0:00:00.903) 0:02:55.180 ******* 2026-03-11 00:56:16.432284 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-11 00:56:16.432287 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-11 00:56:16.432291 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-11 00:56:16.432295 | orchestrator | 2026-03-11 00:56:16.432298 | orchestrator | 
TASK [ceph-config : Reset num_osds] ******************************************** 2026-03-11 00:56:16.432302 | orchestrator | Wednesday 11 March 2026 00:48:34 +0000 (0:00:01.328) 0:02:56.508 ******* 2026-03-11 00:56:16.432306 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.432309 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.432313 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.432317 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.432320 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.432324 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.432328 | orchestrator | 2026-03-11 00:56:16.432331 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2026-03-11 00:56:16.432335 | orchestrator | Wednesday 11 March 2026 00:48:35 +0000 (0:00:00.624) 0:02:57.133 ******* 2026-03-11 00:56:16.432339 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.432342 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.432346 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.432350 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.432354 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.432357 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.432361 | orchestrator | 2026-03-11 00:56:16.432365 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2026-03-11 00:56:16.432368 | orchestrator | Wednesday 11 March 2026 00:48:36 +0000 (0:00:00.903) 0:02:58.037 ******* 2026-03-11 00:56:16.432375 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.432379 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.432382 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.432386 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.432390 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.432393 | orchestrator | skipping: 
[testbed-node-2] 2026-03-11 00:56:16.432397 | orchestrator | 2026-03-11 00:56:16.432413 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2026-03-11 00:56:16.432417 | orchestrator | Wednesday 11 March 2026 00:48:36 +0000 (0:00:00.643) 0:02:58.680 ******* 2026-03-11 00:56:16.432421 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.432425 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.432428 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.432432 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.432436 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.432439 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.432443 | orchestrator | 2026-03-11 00:56:16.432447 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2026-03-11 00:56:16.432450 | orchestrator | Wednesday 11 March 2026 00:48:37 +0000 (0:00:00.846) 0:02:59.527 ******* 2026-03-11 00:56:16.432454 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.432458 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.432461 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.432465 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.432469 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.432472 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.432476 | orchestrator | 2026-03-11 00:56:16.432480 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2026-03-11 00:56:16.432484 | orchestrator | Wednesday 11 March 2026 00:48:38 +0000 (0:00:00.610) 0:03:00.138 ******* 2026-03-11 00:56:16.432487 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.432491 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.432495 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.432498 | 
orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.432502 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.432506 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.432509 | orchestrator | 2026-03-11 00:56:16.432513 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2026-03-11 00:56:16.432517 | orchestrator | Wednesday 11 March 2026 00:48:39 +0000 (0:00:00.814) 0:03:00.952 ******* 2026-03-11 00:56:16.432520 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.432524 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.432528 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.432531 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.432535 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.432539 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.432542 | orchestrator | 2026-03-11 00:56:16.432546 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2026-03-11 00:56:16.432550 | orchestrator | Wednesday 11 March 2026 00:48:39 +0000 (0:00:00.631) 0:03:01.583 ******* 2026-03-11 00:56:16.432553 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.432557 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.432561 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.432564 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.432568 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.432572 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.432576 | orchestrator | 2026-03-11 00:56:16.432579 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2026-03-11 00:56:16.432583 | orchestrator | Wednesday 11 March 2026 00:48:41 +0000 (0:00:01.555) 0:03:03.138 ******* 2026-03-11 00:56:16.432587 | 
orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.432598 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.432601 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.432605 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.432609 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.432612 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.432616 | orchestrator | 2026-03-11 00:56:16.432622 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2026-03-11 00:56:16.432626 | orchestrator | Wednesday 11 March 2026 00:48:44 +0000 (0:00:03.616) 0:03:06.755 ******* 2026-03-11 00:56:16.432630 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.432634 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.432637 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.432641 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.432645 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.432648 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.432652 | orchestrator | 2026-03-11 00:56:16.432656 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2026-03-11 00:56:16.432659 | orchestrator | Wednesday 11 March 2026 00:48:45 +0000 (0:00:00.821) 0:03:07.576 ******* 2026-03-11 00:56:16.432663 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.432667 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.432670 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.432674 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.432678 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.432681 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.432685 | orchestrator | 2026-03-11 00:56:16.432689 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2026-03-11 00:56:16.432693 | orchestrator | Wednesday 11 March 
2026 00:48:46 +0000 (0:00:00.591) 0:03:08.168 ******* 2026-03-11 00:56:16.432696 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.432700 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.432704 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.432707 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.432724 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.432728 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.432732 | orchestrator | 2026-03-11 00:56:16.432736 | orchestrator | TASK [ceph-config : Render rgw configs] **************************************** 2026-03-11 00:56:16.432739 | orchestrator | Wednesday 11 March 2026 00:48:47 +0000 (0:00:00.710) 0:03:08.879 ******* 2026-03-11 00:56:16.432743 | orchestrator | ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-11 00:56:16.432747 | orchestrator | ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-11 00:56:16.432751 | orchestrator | ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-11 00:56:16.432755 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.432772 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.432776 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.432780 | orchestrator | 2026-03-11 00:56:16.432784 | orchestrator | TASK [ceph-config : Set config to cluster] ************************************* 2026-03-11 00:56:16.432787 | orchestrator | Wednesday 11 March 2026 00:48:47 +0000 (0:00:00.528) 0:03:09.407 ******* 2026-03-11 00:56:16.432796 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 
'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])  2026-03-11 00:56:16.432805 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])  2026-03-11 00:56:16.432817 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.432828 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])  2026-03-11 00:56:16.432835 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])  2026-03-11 00:56:16.432840 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.432846 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])  2026-03-11 00:56:16.432853 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast 
endpoint=192.168.16.15:8081'}])  2026-03-11 00:56:16.432859 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.432865 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.432871 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.432877 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.432882 | orchestrator | 2026-03-11 00:56:16.432894 | orchestrator | TASK [ceph-config : Set rgw configs to file] *********************************** 2026-03-11 00:56:16.432900 | orchestrator | Wednesday 11 March 2026 00:48:48 +0000 (0:00:00.808) 0:03:10.216 ******* 2026-03-11 00:56:16.432906 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.432912 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.432916 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.432920 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.432924 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.432927 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.432931 | orchestrator | 2026-03-11 00:56:16.432935 | orchestrator | TASK [ceph-config : Create ceph conf directory] ******************************** 2026-03-11 00:56:16.432938 | orchestrator | Wednesday 11 March 2026 00:48:49 +0000 (0:00:00.708) 0:03:10.924 ******* 2026-03-11 00:56:16.432942 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.432946 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.432949 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.432953 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.432957 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.432960 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.432964 | orchestrator | 2026-03-11 00:56:16.432968 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2026-03-11 00:56:16.432971 | 
orchestrator | Wednesday 11 March 2026 00:48:50 +0000 (0:00:00.951) 0:03:11.876 ******* 2026-03-11 00:56:16.432975 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.432979 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.432982 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.432986 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.432990 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.432993 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.432997 | orchestrator | 2026-03-11 00:56:16.433001 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2026-03-11 00:56:16.433004 | orchestrator | Wednesday 11 March 2026 00:48:50 +0000 (0:00:00.765) 0:03:12.642 ******* 2026-03-11 00:56:16.433012 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.433016 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.433020 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.433024 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.433027 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.433031 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.433034 | orchestrator | 2026-03-11 00:56:16.433038 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2026-03-11 00:56:16.433058 | orchestrator | Wednesday 11 March 2026 00:48:51 +0000 (0:00:00.734) 0:03:13.376 ******* 2026-03-11 00:56:16.433062 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.433066 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.433070 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.433073 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.433077 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.433081 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.433084 | orchestrator | 2026-03-11 
00:56:16.433088 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2026-03-11 00:56:16.433092 | orchestrator | Wednesday 11 March 2026 00:48:52 +0000 (0:00:00.576) 0:03:13.953 ******* 2026-03-11 00:56:16.433095 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.433099 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.433103 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.433107 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.433110 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.433114 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.433117 | orchestrator | 2026-03-11 00:56:16.433121 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2026-03-11 00:56:16.433125 | orchestrator | Wednesday 11 March 2026 00:48:53 +0000 (0:00:00.957) 0:03:14.911 ******* 2026-03-11 00:56:16.433128 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-11 00:56:16.433132 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-11 00:56:16.433136 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-11 00:56:16.433140 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.433143 | orchestrator | 2026-03-11 00:56:16.433147 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2026-03-11 00:56:16.433151 | orchestrator | Wednesday 11 March 2026 00:48:53 +0000 (0:00:00.305) 0:03:15.216 ******* 2026-03-11 00:56:16.433154 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-11 00:56:16.433158 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-11 00:56:16.433161 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-11 00:56:16.433165 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.433169 | orchestrator | 2026-03-11 
00:56:16.433172 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2026-03-11 00:56:16.433176 | orchestrator | Wednesday 11 March 2026 00:48:53 +0000 (0:00:00.311) 0:03:15.527 ******* 2026-03-11 00:56:16.433180 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-11 00:56:16.433183 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-11 00:56:16.433187 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-11 00:56:16.433191 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.433194 | orchestrator | 2026-03-11 00:56:16.433198 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2026-03-11 00:56:16.433202 | orchestrator | Wednesday 11 March 2026 00:48:54 +0000 (0:00:00.365) 0:03:15.893 ******* 2026-03-11 00:56:16.433205 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.433209 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.433213 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.433216 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.433220 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.433224 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.433231 | orchestrator | 2026-03-11 00:56:16.433235 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2026-03-11 00:56:16.433238 | orchestrator | Wednesday 11 March 2026 00:48:54 +0000 (0:00:00.772) 0:03:16.665 ******* 2026-03-11 00:56:16.433245 | orchestrator | ok: [testbed-node-3] => (item=0) 2026-03-11 00:56:16.433249 | orchestrator | skipping: [testbed-node-0] => (item=0)  2026-03-11 00:56:16.433252 | orchestrator | ok: [testbed-node-4] => (item=0) 2026-03-11 00:56:16.433256 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.433260 | orchestrator | skipping: [testbed-node-1] => (item=0)  2026-03-11 00:56:16.433263 | 
orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.433267 | orchestrator | skipping: [testbed-node-2] => (item=0)  2026-03-11 00:56:16.433271 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.433274 | orchestrator | ok: [testbed-node-5] => (item=0) 2026-03-11 00:56:16.433278 | orchestrator | 2026-03-11 00:56:16.433282 | orchestrator | TASK [ceph-config : Generate Ceph file] **************************************** 2026-03-11 00:56:16.433285 | orchestrator | Wednesday 11 March 2026 00:48:56 +0000 (0:00:01.767) 0:03:18.433 ******* 2026-03-11 00:56:16.433289 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:16.433293 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:16.433296 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:16.433300 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:16.433303 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:16.433307 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:16.433311 | orchestrator | 2026-03-11 00:56:16.433314 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-11 00:56:16.433318 | orchestrator | Wednesday 11 March 2026 00:48:59 +0000 (0:00:03.019) 0:03:21.452 ******* 2026-03-11 00:56:16.433322 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:16.433325 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:16.433329 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:16.433333 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:16.433336 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:16.433340 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:16.433343 | orchestrator | 2026-03-11 00:56:16.433347 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-11 00:56:16.433351 | orchestrator | Wednesday 11 March 2026 00:49:00 +0000 (0:00:01.307) 0:03:22.760 ******* 2026-03-11 00:56:16.433354 
| orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.433358 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.433362 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.433365 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:16.433369 | orchestrator | 2026-03-11 00:56:16.433373 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-11 00:56:16.433389 | orchestrator | Wednesday 11 March 2026 00:49:01 +0000 (0:00:01.018) 0:03:23.778 ******* 2026-03-11 00:56:16.433393 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.433397 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.433401 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.433404 | orchestrator | 2026-03-11 00:56:16.433408 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-11 00:56:16.433412 | orchestrator | Wednesday 11 March 2026 00:49:02 +0000 (0:00:00.311) 0:03:24.090 ******* 2026-03-11 00:56:16.433415 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:16.433419 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:16.433423 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:16.433426 | orchestrator | 2026-03-11 00:56:16.433430 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-11 00:56:16.433434 | orchestrator | Wednesday 11 March 2026 00:49:03 +0000 (0:00:01.163) 0:03:25.254 ******* 2026-03-11 00:56:16.433438 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-11 00:56:16.433441 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-11 00:56:16.433449 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-11 00:56:16.433452 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.433456 | 
RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Wednesday 11 March 2026 00:49:04 +0000 (0:00:00.972) 0:03:26.227 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Wednesday 11 March 2026 00:49:04 +0000 (0:00:00.374) 0:03:26.601 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Wednesday 11 March 2026 00:49:05 +0000 (0:00:01.010) 0:03:27.612 *******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Wednesday 11 March 2026 00:49:06 +0000 (0:00:00.352) 0:03:27.965 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Wednesday 11 March 2026 00:49:06 +0000 (0:00:00.297) 0:03:28.263 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Wednesday 11 March 2026 00:49:06 +0000 (0:00:00.227) 0:03:28.491 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Wednesday 11 March 2026 00:49:06 +0000 (0:00:00.312) 0:03:28.804 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Wednesday 11 March 2026 00:49:07 +0000 (0:00:00.183) 0:03:28.987 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Wednesday 11 March 2026 00:49:07 +0000 (0:00:00.207) 0:03:29.195 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Wednesday 11 March 2026 00:49:07 +0000 (0:00:00.104) 0:03:29.300 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Wednesday 11 March 2026 00:49:08 +0000 (0:00:00.583) 0:03:29.883 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Wednesday 11 March 2026 00:49:08 +0000 (0:00:00.223) 0:03:30.107 *******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
Wednesday 11 March 2026 00:49:08 +0000 (0:00:00.375) 0:03:30.482 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Wednesday 11 March 2026 00:49:08 +0000 (0:00:00.273) 0:03:30.756 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
Wednesday 11 March 2026 00:49:09 +0000 (0:00:00.231) 0:03:30.988 *******
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Wednesday 11 March 2026 00:49:09 +0000 (0:00:00.265) 0:03:31.253 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Wednesday 11 March 2026 00:49:10 +0000 (0:00:00.897) 0:03:32.151 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Wednesday 11 March 2026 00:49:10 +0000 (0:00:00.318) 0:03:32.469 *******
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Wednesday 11 March 2026 00:49:12 +0000 (0:00:01.469) 0:03:33.939 *******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Wednesday 11 March 2026 00:49:12 +0000 (0:00:00.687) 0:03:34.626 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
Wednesday 11 March 2026 00:49:13 +0000 (0:00:00.433) 0:03:35.060 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
Wednesday 11 March 2026 00:49:13 +0000 (0:00:00.712) 0:03:35.773 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
Wednesday 11 March 2026 00:49:14 +0000 (0:00:00.464) 0:03:36.237 *******
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]

RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
Wednesday 11 March 2026 00:49:15 +0000 (0:00:01.284) 0:03:37.521 *******
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
Wednesday 11 March 2026 00:49:16 +0000 (0:00:00.522) 0:03:38.044 *******
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
Wednesday 11 March 2026 00:49:16 +0000 (0:00:00.259) 0:03:38.304 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Wednesday 11 March 2026 00:49:17 +0000 (0:00:00.659) 0:03:38.963 *******
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Wednesday 11 March 2026 00:49:17 +0000 (0:00:00.709) 0:03:39.673 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Wednesday 11 March 2026 00:49:18 +0000 (0:00:00.446) 0:03:40.120 *******
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Wednesday 11 March 2026 00:49:19 +0000 (0:00:01.294) 0:03:41.414 *******
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Wednesday 11 March 2026 00:49:20 +0000 (0:00:00.555) 0:03:41.970 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mon] *****************************************************
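The paired "Set _xxx_handler_called before/after restart" handlers above implement a restart-once gate: a per-host fact marks that the restart handler already ran, so repeated notifications within the same play do not restart a daemon twice. A hypothetical minimal sketch of that pattern, with illustrative names rather than the actual ceph-handler role code:

```yaml
# Hypothetical sketch of the restart-once gating pattern; task and
# variable names are illustrative, not the real ceph-ansible source.
- name: Set _mon_handler_called before restart
  ansible.builtin.set_fact:
    _mon_handler_called: true

- name: Restart ceph mon daemon(s)
  ansible.builtin.command: /usr/bin/env bash /tmp/restart_mon_daemon.sh
  when:
    # only restart where a running daemon was detected earlier ...
    - hostvars[item]['handler_mon_status'] | default(False) | bool
    # ... and only if this host has not been restarted in this run yet
    - not hostvars[item]['_mon_handler_called'] | default(False) | bool
  with_items: "{{ groups['mons'] }}"
  delegate_to: "{{ item }}"
  run_once: true

- name: Set _mon_handler_called after restart
  ansible.builtin.set_fact:
    _mon_handler_called: false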
TASK [ceph-handler : Include check_running_cluster.yml] ************************
Wednesday 11 March 2026 00:49:20 +0000 (0:00:00.512) 0:03:42.483 *******
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Wednesday 11 March 2026 00:49:21 +0000 (0:00:00.619) 0:03:43.102 *******
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Wednesday 11 March 2026 00:49:21 +0000 (0:00:00.452) 0:03:43.555 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for an osd container] *******************************
Wednesday 11 March 2026 00:49:22 +0000 (0:00:00.304) 0:03:44.404 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Wednesday 11 March 2026 00:49:22 +0000 (0:00:00.304) 0:03:44.709 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a rgw container] ********************************
Wednesday 11 March 2026 00:49:23 +0000 (0:00:00.287) 0:03:44.996 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Wednesday 11 March 2026 00:49:23 +0000 (0:00:00.280) 0:03:45.276 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Wednesday 11 March 2026 00:49:24 +0000 (0:00:00.983) 0:03:46.259 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Wednesday 11 March 2026 00:49:24 +0000 (0:00:00.316) 0:03:46.576 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Wednesday 11 March 2026 00:49:25 +0000 (0:00:00.279) 0:03:46.855 *******
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Wednesday 11 March 2026 00:49:25 +0000 (0:00:00.737) 0:03:47.592 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Wednesday 11 March 2026 00:49:26 +0000 (0:00:00.931) 0:03:48.524 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Wednesday 11 March 2026 00:49:26 +0000 (0:00:00.274) 0:03:48.798 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Wednesday 11 March 2026 00:49:27 +0000 (0:00:00.279) 0:03:49.077 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Wednesday 11 March 2026 00:49:27 +0000 (0:00:00.269) 0:03:49.347 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Wednesday 11 March 2026 00:49:27 +0000 (0:00:00.250) 0:03:49.597 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Wednesday 11 March 2026 00:49:28 +0000 (0:00:00.434) 0:03:50.032 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Wednesday 11 March 2026 00:49:28 +0000 (0:00:00.259) 0:03:50.291 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Wednesday 11 March 2026 00:49:28 +0000 (0:00:00.269) 0:03:50.560 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Wednesday 11 March 2026 00:49:29 +0000 (0:00:00.411) 0:03:50.971 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Wednesday 11 March 2026 00:49:29 +0000 (0:00:00.690) 0:03:51.662 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact container_exec_cmd] **********************************
Wednesday 11 March 2026 00:49:30 +0000 (0:00:00.493) 0:03:52.155 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Include deploy_monitors.yml] **********************************
Wednesday 11 March 2026 00:49:30 +0000 (0:00:00.357) 0:03:52.513 *******
included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Check if monitor initial keyring already exists] **************
Wednesday 11 March 2026 00:49:31 +0000 (0:00:00.713) 0:03:53.227 *******
skipping: [testbed-node-0]

TASK [ceph-mon : Generate monitor initial keyring] *****************************
Wednesday 11 March 2026 00:49:31 +0000 (0:00:00.142) 0:03:53.369 *******
changed: [testbed-node-0 -> localhost]

TASK [ceph-mon : Set_fact _initial_mon_key_success] ****************************
Wednesday 11 March 2026 00:49:32 +0000 (0:00:00.951) 0:03:54.321 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Get initial keyring when it already exists] *******************
Wednesday 11 March 2026 00:49:32 +0000 (0:00:00.285) 0:03:54.607 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create monitor initial keyring] *******************************
Wednesday 11 March 2026 00:49:33 +0000 (0:00:00.326) 0:03:54.934 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] ***********
Wednesday 11 March 2026 00:49:34 +0000 (0:00:01.342) 0:03:56.277 *******
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Create monitor directory] *************************************
Wednesday 11 March 2026 00:49:35 +0000 (0:00:00.933) 0:03:57.210 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Recursively fix ownership of monitor directory] ***************
Wednesday 11 March 2026 00:49:36 +0000 (0:00:00.767) 0:03:57.979 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Create admin keyring] *****************************************
Wednesday 11 March 2026 00:49:37 +0000 (0:00:01.317) 0:03:59.296 *******
changed: [testbed-node-0]

TASK [ceph-mon : Slurp admin keyring] ******************************************
Wednesday 11 March 2026 00:49:39 +0000 (0:00:01.802) 0:04:01.098 *******
ok: [testbed-node-0]

TASK [ceph-mon : Copy admin keyring over to mons] ******************************
Wednesday 11 March 2026 00:49:39 +0000 (0:00:00.692) 0:04:01.790 *******
changed: [testbed-node-0] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None)
ok: [testbed-node-1] => (item=None)
ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None)
ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None)
ok: [testbed-node-1 -> {{ item }}]
changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None)
changed: [testbed-node-0 -> {{ item }}]
ok: [testbed-node-2] => (item=None)
ok: [testbed-node-2 -> {{ item }}]

TASK [ceph-mon : Import admin keyring into mon keyring] ************************
Wednesday 11 March 2026 00:49:42 +0000 (0:00:02.804) 0:04:04.594 *******
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mon : Set_fact ceph-mon container command] **************************
Wednesday 11 March 2026 00:49:43 +0000 (0:00:01.066) 0:04:05.661 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Set_fact monmaptool container command] ************************
Wednesday 11 March 2026 00:49:44 +0000 (0:00:00.262) 0:04:05.924 *******
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mon : Generate initial monmap] **************************************
Wednesday 11 March 2026 00:49:44 +0000 (0:00:00.486) 0:04:06.410 *******
changed: [testbed-node-2]
changed: [testbed-node-0]
changed: [testbed-node-1]

TASK [ceph-mon : Ceph monitor mkfs with keyring] *******************************
Wednesday 11 March 2026 00:49:46 +0000 (0:00:01.757) 0:04:08.168 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Ceph monitor mkfs without keyring] ****************************
Wednesday 11 March 2026 00:49:47 +0000 (0:00:01.174) 0:04:09.343 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
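The "Generate initial monmap" and "Ceph monitor mkfs" tasks above wrap the standard Ceph bootstrap tools. A hedged sketch of the kind of command lines involved; the fsid, monitor name, and IP are illustrative placeholders, and the script only prints the commands rather than executing them, since the ceph packages are assumed to exist only on the testbed nodes:

```shell
#!/bin/sh
# Illustrative placeholder values; the real fsid and addresses
# come from the ceph-ansible inventory and group_vars.
FSID="00000000-0000-0000-0000-000000000000"
MON_NAME="testbed-node-0"
MON_IP="192.168.16.10"

# Typical monmaptool invocation: seed an initial monmap with one monitor.
MONMAP_CMD="monmaptool --create --fsid ${FSID} --add ${MON_NAME} ${MON_IP} /tmp/monmap"

# Typical ceph-mon mkfs invocation: initialize the monitor store from
# that monmap plus the previously generated mon keyring.
MKFS_CMD="ceph-mon --cluster ceph --mkfs -i ${MON_NAME} --monmap /tmp/monmap --keyring /etc/ceph/ceph.mon.keyring"

echo "${MONMAP_CMD}"
echo "${MKFS_CMD}"
```

In the containerized deployment logged here, both commands run inside the ceph container image, which is why the role first sets "monmaptool container command" and "ceph-mon container command" facts.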
TASK [ceph-mon : Include start_monitor.yml] ************************************
Wednesday 11 March 2026 00:49:47 +0000 (0:00:00.356) 0:04:09.700 *******
included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Ensure systemd service override directory exists] *************
Wednesday 11 March 2026 00:49:48 +0000 (0:00:00.722) 0:04:10.423 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Add ceph-mon systemd service overrides] ***********************
Wednesday 11 March 2026 00:49:48 +0000 (0:00:00.352) 0:04:10.776 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Include_tasks systemd.yml] ************************************
Wednesday 11 March 2026 00:49:49 +0000 (0:00:00.428) 0:04:11.204 *******
included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Generate systemd unit file for mon container] *****************
Wednesday 11 March 2026 00:49:50 +0000 (0:00:00.913) 0:04:12.117 *******
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

TASK [ceph-mon : Generate systemd ceph-mon target file] ************************
Wednesday 11 March 2026 00:49:52 +0000 (0:00:02.297) 0:04:14.415 *******
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [ceph-mon : Enable ceph-mon.target] ***************************************
Wednesday 11 March 2026 00:49:54 +0000 (0:00:01.723) 0:04:16.139 *******
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-0]

TASK [ceph-mon : Start the monitor service] ************************************
Wednesday 11 March 2026 00:49:56 +0000 (0:00:02.176) 0:04:18.316 *******
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mon : Include_tasks ceph_keys.yml] **********************************
Wednesday 11 March 2026 00:49:58 +0000 (0:00:02.401) 0:04:20.717 *******
included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] *************
Wednesday 11 March 2026 00:49:59 +0000 (0:00:00.485) 0:04:21.203 *******
ok: [testbed-node-0]

TASK [ceph-mon : Fetch ceph initial keys] **************************************
Wednesday 11 March 2026 00:50:00 +0000 (0:00:01.104) 0:04:22.307 *******
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [ceph-mon : Include secure_cluster.yml] ***********************************
Wednesday 11 March 2026 00:50:10 +0000 (0:00:09.575) 0:04:31.882 *******
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : Set cluster configs] ******************************************
Wednesday 11 March 2026 00:50:10 +0000 (0:00:00.481) 0:04:32.364 *******
changed: [testbed-node-0] =>
(item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__645f6eaaa4b5ce13cab998cc2bf167a2a09bea1c'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2026-03-11 00:56:16.436023 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__645f6eaaa4b5ce13cab998cc2bf167a2a09bea1c'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2026-03-11 00:56:16.436030 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__645f6eaaa4b5ce13cab998cc2bf167a2a09bea1c'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2026-03-11 00:56:16.436035 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__645f6eaaa4b5ce13cab998cc2bf167a2a09bea1c'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2026-03-11 00:56:16.436040 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__645f6eaaa4b5ce13cab998cc2bf167a2a09bea1c'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 
2026-03-11 00:56:16.436045 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__645f6eaaa4b5ce13cab998cc2bf167a2a09bea1c'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__645f6eaaa4b5ce13cab998cc2bf167a2a09bea1c'}])  2026-03-11 00:56:16.436050 | orchestrator | 2026-03-11 00:56:16.436054 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-11 00:56:16.436061 | orchestrator | Wednesday 11 March 2026 00:50:24 +0000 (0:00:14.057) 0:04:46.422 ******* 2026-03-11 00:56:16.436065 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.436069 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.436072 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.436080 | orchestrator | 2026-03-11 00:56:16.436083 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2026-03-11 00:56:16.436088 | orchestrator | Wednesday 11 March 2026 00:50:24 +0000 (0:00:00.296) 0:04:46.718 ******* 2026-03-11 00:56:16.436094 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:16.436099 | orchestrator | 2026-03-11 00:56:16.436105 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2026-03-11 00:56:16.436110 | orchestrator | Wednesday 11 March 2026 00:50:25 +0000 (0:00:00.696) 0:04:47.415 ******* 2026-03-11 00:56:16.436116 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.436122 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.436128 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.436134 | orchestrator | 2026-03-11 00:56:16.436139 | orchestrator | RUNNING 
HANDLER [ceph-handler : Copy mon restart script] *********************** 2026-03-11 00:56:16.436145 | orchestrator | Wednesday 11 March 2026 00:50:25 +0000 (0:00:00.310) 0:04:47.726 ******* 2026-03-11 00:56:16.436150 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.436156 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.436162 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.436167 | orchestrator | 2026-03-11 00:56:16.436173 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2026-03-11 00:56:16.436179 | orchestrator | Wednesday 11 March 2026 00:50:26 +0000 (0:00:00.282) 0:04:48.008 ******* 2026-03-11 00:56:16.436185 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2026-03-11 00:56:16.436191 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2026-03-11 00:56:16.436197 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2026-03-11 00:56:16.436202 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.436208 | orchestrator | 2026-03-11 00:56:16.436214 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2026-03-11 00:56:16.436220 | orchestrator | Wednesday 11 March 2026 00:50:26 +0000 (0:00:00.604) 0:04:48.612 ******* 2026-03-11 00:56:16.436226 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.436232 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.436240 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.436244 | orchestrator | 2026-03-11 00:56:16.436248 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2026-03-11 00:56:16.436251 | orchestrator | 2026-03-11 00:56:16.436273 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-11 00:56:16.436277 | orchestrator | Wednesday 11 March 2026 00:50:27 +0000 (0:00:00.657) 0:04:49.270 ******* 2026-03-11 
00:56:16.436281 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:16.436286 | orchestrator | 2026-03-11 00:56:16.436290 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-11 00:56:16.436294 | orchestrator | Wednesday 11 March 2026 00:50:27 +0000 (0:00:00.471) 0:04:49.742 ******* 2026-03-11 00:56:16.436297 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:16.436301 | orchestrator | 2026-03-11 00:56:16.436305 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-11 00:56:16.436309 | orchestrator | Wednesday 11 March 2026 00:50:28 +0000 (0:00:00.609) 0:04:50.351 ******* 2026-03-11 00:56:16.436312 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.436316 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.436320 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.436324 | orchestrator | 2026-03-11 00:56:16.436327 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-11 00:56:16.436331 | orchestrator | Wednesday 11 March 2026 00:50:29 +0000 (0:00:00.660) 0:04:51.011 ******* 2026-03-11 00:56:16.436335 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.436340 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.436352 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.436358 | orchestrator | 2026-03-11 00:56:16.436364 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-11 00:56:16.436370 | orchestrator | Wednesday 11 March 2026 00:50:29 +0000 (0:00:00.235) 0:04:51.246 ******* 2026-03-11 00:56:16.436376 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.436382 | orchestrator | skipping: 
[testbed-node-1] 2026-03-11 00:56:16.436388 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.436393 | orchestrator | 2026-03-11 00:56:16.436399 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-11 00:56:16.436406 | orchestrator | Wednesday 11 March 2026 00:50:29 +0000 (0:00:00.409) 0:04:51.655 ******* 2026-03-11 00:56:16.436412 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.436418 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.436425 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.436431 | orchestrator | 2026-03-11 00:56:16.436437 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-11 00:56:16.436443 | orchestrator | Wednesday 11 March 2026 00:50:30 +0000 (0:00:00.223) 0:04:51.879 ******* 2026-03-11 00:56:16.436450 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.436455 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.436459 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.436463 | orchestrator | 2026-03-11 00:56:16.436466 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2026-03-11 00:56:16.436470 | orchestrator | Wednesday 11 March 2026 00:50:30 +0000 (0:00:00.613) 0:04:52.492 ******* 2026-03-11 00:56:16.436474 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.436477 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.436481 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.436485 | orchestrator | 2026-03-11 00:56:16.436488 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-11 00:56:16.436492 | orchestrator | Wednesday 11 March 2026 00:50:30 +0000 (0:00:00.217) 0:04:52.710 ******* 2026-03-11 00:56:16.436500 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.436504 | orchestrator | skipping: [testbed-node-1] 
2026-03-11 00:56:16.436508 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.436513 | orchestrator | 2026-03-11 00:56:16.436519 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-11 00:56:16.436525 | orchestrator | Wednesday 11 March 2026 00:50:31 +0000 (0:00:00.517) 0:04:53.227 ******* 2026-03-11 00:56:16.436530 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.436536 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.436542 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.436548 | orchestrator | 2026-03-11 00:56:16.436554 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-11 00:56:16.436561 | orchestrator | Wednesday 11 March 2026 00:50:31 +0000 (0:00:00.525) 0:04:53.753 ******* 2026-03-11 00:56:16.436567 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.436573 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.436579 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.436584 | orchestrator | 2026-03-11 00:56:16.436590 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-11 00:56:16.436596 | orchestrator | Wednesday 11 March 2026 00:50:32 +0000 (0:00:00.617) 0:04:54.370 ******* 2026-03-11 00:56:16.436600 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.436603 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.436607 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.436611 | orchestrator | 2026-03-11 00:56:16.436614 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-11 00:56:16.436618 | orchestrator | Wednesday 11 March 2026 00:50:32 +0000 (0:00:00.217) 0:04:54.587 ******* 2026-03-11 00:56:16.436622 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.436625 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.436629 | orchestrator | ok: 
[testbed-node-2] 2026-03-11 00:56:16.436633 | orchestrator | 2026-03-11 00:56:16.436641 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-11 00:56:16.436645 | orchestrator | Wednesday 11 March 2026 00:50:33 +0000 (0:00:00.250) 0:04:54.838 ******* 2026-03-11 00:56:16.436648 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.436652 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.436656 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.436660 | orchestrator | 2026-03-11 00:56:16.436663 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-11 00:56:16.436667 | orchestrator | Wednesday 11 March 2026 00:50:33 +0000 (0:00:00.383) 0:04:55.221 ******* 2026-03-11 00:56:16.436671 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.436674 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.436698 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.436702 | orchestrator | 2026-03-11 00:56:16.436706 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-11 00:56:16.436748 | orchestrator | Wednesday 11 March 2026 00:50:33 +0000 (0:00:00.228) 0:04:55.449 ******* 2026-03-11 00:56:16.436753 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.436757 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.436761 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.436764 | orchestrator | 2026-03-11 00:56:16.436768 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-11 00:56:16.436772 | orchestrator | Wednesday 11 March 2026 00:50:33 +0000 (0:00:00.236) 0:04:55.686 ******* 2026-03-11 00:56:16.436775 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.436779 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.436783 | orchestrator | skipping: 
[testbed-node-2] 2026-03-11 00:56:16.436786 | orchestrator | 2026-03-11 00:56:16.436790 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-11 00:56:16.436794 | orchestrator | Wednesday 11 March 2026 00:50:34 +0000 (0:00:00.250) 0:04:55.937 ******* 2026-03-11 00:56:16.436798 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.436801 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.436805 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.436809 | orchestrator | 2026-03-11 00:56:16.436813 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-11 00:56:16.436816 | orchestrator | Wednesday 11 March 2026 00:50:34 +0000 (0:00:00.437) 0:04:56.374 ******* 2026-03-11 00:56:16.436820 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.436824 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.436828 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.436831 | orchestrator | 2026-03-11 00:56:16.436835 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-11 00:56:16.436839 | orchestrator | Wednesday 11 March 2026 00:50:34 +0000 (0:00:00.278) 0:04:56.653 ******* 2026-03-11 00:56:16.436843 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.436846 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.436850 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.436854 | orchestrator | 2026-03-11 00:56:16.436857 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-11 00:56:16.436861 | orchestrator | Wednesday 11 March 2026 00:50:35 +0000 (0:00:00.265) 0:04:56.918 ******* 2026-03-11 00:56:16.436865 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.436869 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.436872 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.436876 | 
orchestrator | 2026-03-11 00:56:16.436880 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2026-03-11 00:56:16.436883 | orchestrator | Wednesday 11 March 2026 00:50:35 +0000 (0:00:00.653) 0:04:57.571 ******* 2026-03-11 00:56:16.436887 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-11 00:56:16.436891 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-11 00:56:16.436895 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-11 00:56:16.436903 | orchestrator | 2026-03-11 00:56:16.436907 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2026-03-11 00:56:16.436911 | orchestrator | Wednesday 11 March 2026 00:50:36 +0000 (0:00:00.571) 0:04:58.142 ******* 2026-03-11 00:56:16.436915 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:16.436919 | orchestrator | 2026-03-11 00:56:16.436922 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2026-03-11 00:56:16.436930 | orchestrator | Wednesday 11 March 2026 00:50:36 +0000 (0:00:00.495) 0:04:58.638 ******* 2026-03-11 00:56:16.436934 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:16.436937 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:16.436941 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:16.436945 | orchestrator | 2026-03-11 00:56:16.436949 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2026-03-11 00:56:16.436952 | orchestrator | Wednesday 11 March 2026 00:50:37 +0000 (0:00:00.600) 0:04:59.238 ******* 2026-03-11 00:56:16.436956 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.436960 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.436964 | orchestrator | 
skipping: [testbed-node-2] 2026-03-11 00:56:16.436967 | orchestrator | 2026-03-11 00:56:16.436971 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2026-03-11 00:56:16.436976 | orchestrator | Wednesday 11 March 2026 00:50:37 +0000 (0:00:00.449) 0:04:59.688 ******* 2026-03-11 00:56:16.436982 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-11 00:56:16.436988 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-11 00:56:16.436994 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-11 00:56:16.436999 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2026-03-11 00:56:16.437005 | orchestrator | 2026-03-11 00:56:16.437011 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2026-03-11 00:56:16.437016 | orchestrator | Wednesday 11 March 2026 00:50:47 +0000 (0:00:09.985) 0:05:09.674 ******* 2026-03-11 00:56:16.437022 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.437028 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.437034 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.437040 | orchestrator | 2026-03-11 00:56:16.437046 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2026-03-11 00:56:16.437052 | orchestrator | Wednesday 11 March 2026 00:50:48 +0000 (0:00:00.341) 0:05:10.015 ******* 2026-03-11 00:56:16.437058 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-11 00:56:16.437064 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-11 00:56:16.437071 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-11 00:56:16.437077 | orchestrator | ok: [testbed-node-0] => (item=None) 2026-03-11 00:56:16.437081 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2026-03-11 00:56:16.437085 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => 
(item=None) 2026-03-11 00:56:16.437089 | orchestrator | 2026-03-11 00:56:16.437110 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2026-03-11 00:56:16.437115 | orchestrator | Wednesday 11 March 2026 00:50:50 +0000 (0:00:01.927) 0:05:11.942 ******* 2026-03-11 00:56:16.437119 | orchestrator | skipping: [testbed-node-0] => (item=None)  2026-03-11 00:56:16.437123 | orchestrator | skipping: [testbed-node-1] => (item=None)  2026-03-11 00:56:16.437126 | orchestrator | skipping: [testbed-node-2] => (item=None)  2026-03-11 00:56:16.437130 | orchestrator | changed: [testbed-node-0] => (item=None) 2026-03-11 00:56:16.437134 | orchestrator | changed: [testbed-node-1] => (item=None) 2026-03-11 00:56:16.437137 | orchestrator | changed: [testbed-node-2] => (item=None) 2026-03-11 00:56:16.437143 | orchestrator | 2026-03-11 00:56:16.437150 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2026-03-11 00:56:16.437156 | orchestrator | Wednesday 11 March 2026 00:50:51 +0000 (0:00:01.206) 0:05:13.149 ******* 2026-03-11 00:56:16.437172 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.437179 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.437185 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.437191 | orchestrator | 2026-03-11 00:56:16.437197 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2026-03-11 00:56:16.437205 | orchestrator | Wednesday 11 March 2026 00:50:52 +0000 (0:00:01.063) 0:05:14.212 ******* 2026-03-11 00:56:16.437209 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.437212 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.437216 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.437219 | orchestrator | 2026-03-11 00:56:16.437224 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2026-03-11 00:56:16.437231 | 
orchestrator | Wednesday 11 March 2026 00:50:52 +0000 (0:00:00.321) 0:05:14.534 ******* 2026-03-11 00:56:16.437237 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.437243 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.437248 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.437254 | orchestrator | 2026-03-11 00:56:16.437260 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2026-03-11 00:56:16.437266 | orchestrator | Wednesday 11 March 2026 00:50:53 +0000 (0:00:00.297) 0:05:14.832 ******* 2026-03-11 00:56:16.437273 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:16.437279 | orchestrator | 2026-03-11 00:56:16.437285 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2026-03-11 00:56:16.437291 | orchestrator | Wednesday 11 March 2026 00:50:53 +0000 (0:00:00.691) 0:05:15.523 ******* 2026-03-11 00:56:16.437297 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.437302 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.437306 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.437310 | orchestrator | 2026-03-11 00:56:16.437314 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2026-03-11 00:56:16.437317 | orchestrator | Wednesday 11 March 2026 00:50:53 +0000 (0:00:00.278) 0:05:15.802 ******* 2026-03-11 00:56:16.437321 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.437325 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.437329 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.437332 | orchestrator | 2026-03-11 00:56:16.437336 | orchestrator | TASK [ceph-mgr : Include_tasks systemd.yml] ************************************ 2026-03-11 00:56:16.437340 | orchestrator | Wednesday 11 March 2026 00:50:54 +0000 (0:00:00.289) 
0:05:16.092 ******* 2026-03-11 00:56:16.437344 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:16.437347 | orchestrator | 2026-03-11 00:56:16.437357 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2026-03-11 00:56:16.437363 | orchestrator | Wednesday 11 March 2026 00:50:54 +0000 (0:00:00.598) 0:05:16.690 ******* 2026-03-11 00:56:16.437369 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:16.437374 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:16.437380 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:16.437386 | orchestrator | 2026-03-11 00:56:16.437392 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2026-03-11 00:56:16.437399 | orchestrator | Wednesday 11 March 2026 00:50:56 +0000 (0:00:01.189) 0:05:17.879 ******* 2026-03-11 00:56:16.437404 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:16.437410 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:16.437415 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:16.437421 | orchestrator | 2026-03-11 00:56:16.437426 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2026-03-11 00:56:16.437431 | orchestrator | Wednesday 11 March 2026 00:50:57 +0000 (0:00:00.985) 0:05:18.865 ******* 2026-03-11 00:56:16.437437 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:16.437443 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:16.437448 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:16.437459 | orchestrator | 2026-03-11 00:56:16.437465 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2026-03-11 00:56:16.437471 | orchestrator | Wednesday 11 March 2026 00:50:58 +0000 (0:00:01.805) 0:05:20.670 ******* 2026-03-11 00:56:16.437476 | orchestrator | changed: 
[testbed-node-0] 2026-03-11 00:56:16.437482 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:16.437488 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:16.437493 | orchestrator | 2026-03-11 00:56:16.437499 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2026-03-11 00:56:16.437505 | orchestrator | Wednesday 11 March 2026 00:51:01 +0000 (0:00:02.187) 0:05:22.858 ******* 2026-03-11 00:56:16.437512 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.437518 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.437524 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2026-03-11 00:56:16.437530 | orchestrator | 2026-03-11 00:56:16.437536 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2026-03-11 00:56:16.437542 | orchestrator | Wednesday 11 March 2026 00:51:01 +0000 (0:00:00.596) 0:05:23.454 ******* 2026-03-11 00:56:16.437548 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2026-03-11 00:56:16.437580 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2026-03-11 00:56:16.437586 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2026-03-11 00:56:16.437592 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2026-03-11 00:56:16.437602 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2026-03-11 00:56:16.437612 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2026-03-11 00:56:16.437618 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-11 00:56:16.437624 | orchestrator |
2026-03-11 00:56:16.437630 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2026-03-11 00:56:16.437635 | orchestrator | Wednesday 11 March 2026 00:51:37 +0000 (0:00:36.185) 0:05:59.639 *******
2026-03-11 00:56:16.437641 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2026-03-11 00:56:16.437646 | orchestrator |
2026-03-11 00:56:16.437652 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2026-03-11 00:56:16.437658 | orchestrator | Wednesday 11 March 2026 00:51:39 +0000 (0:00:01.482) 0:06:01.123 *******
2026-03-11 00:56:16.437664 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.437670 | orchestrator |
2026-03-11 00:56:16.437676 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2026-03-11 00:56:16.437682 | orchestrator | Wednesday 11 March 2026 00:51:39 +0000 (0:00:00.145) 0:06:01.423 *******
2026-03-11 00:56:16.437688 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.437694 | orchestrator |
2026-03-11 00:56:16.437698 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2026-03-11 00:56:16.437702 | orchestrator | Wednesday 11 March 2026 00:51:39 +0000 (0:00:00.145) 0:06:01.568 *******
2026-03-11 00:56:16.437705 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2026-03-11 00:56:16.437733 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2026-03-11 00:56:16.437739 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2026-03-11 00:56:16.437744 | orchestrator |
2026-03-11 00:56:16.437750 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2026-03-11 00:56:16.437756 | orchestrator | Wednesday 11 March 2026 00:51:46 +0000 (0:00:06.738) 0:06:08.307 *******
2026-03-11 00:56:16.437762 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2026-03-11 00:56:16.437776 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2026-03-11 00:56:16.437782 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2026-03-11 00:56:16.437789 | orchestrator | skipping: [testbed-node-2] => (item=status)
2026-03-11 00:56:16.437794 | orchestrator |
2026-03-11 00:56:16.437800 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-11 00:56:16.437805 | orchestrator | Wednesday 11 March 2026 00:51:51 +0000 (0:00:05.089) 0:06:13.396 *******
2026-03-11 00:56:16.437811 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:16.437816 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:16.437821 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:16.437826 | orchestrator |
2026-03-11 00:56:16.437837 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2026-03-11 00:56:16.437843 | orchestrator | Wednesday 11 March 2026 00:51:52 +0000 (0:00:00.642) 0:06:14.039 *******
2026-03-11 00:56:16.437848 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:56:16.437854 | orchestrator |
2026-03-11 00:56:16.437860 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2026-03-11 00:56:16.437866 | orchestrator | Wednesday 11 March 2026 00:51:52 +0000 (0:00:00.346) 0:06:14.776 *******
2026-03-11 00:56:16.437873 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:16.437879 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:16.437884 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.437890 | orchestrator |
2026-03-11 00:56:16.437897 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2026-03-11 00:56:16.437901 | orchestrator | Wednesday 11 March 2026 00:51:53 +0000 (0:00:00.346) 0:06:15.123 *******
2026-03-11 00:56:16.437905 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:56:16.437908 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:56:16.437912 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:56:16.437916 | orchestrator |
2026-03-11 00:56:16.437919 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2026-03-11 00:56:16.437923 | orchestrator | Wednesday 11 March 2026 00:51:54 +0000 (0:00:01.203) 0:06:16.326 *******
2026-03-11 00:56:16.437927 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2026-03-11 00:56:16.437930 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2026-03-11 00:56:16.437934 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2026-03-11 00:56:16.437938 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:56:16.437942 | orchestrator |
2026-03-11 00:56:16.437945 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2026-03-11 00:56:16.437949 | orchestrator | Wednesday 11 March 2026 00:51:55 +0000 (0:00:00.599) 0:06:16.925 *******
2026-03-11 00:56:16.437953 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:56:16.437956 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:56:16.437960 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:56:16.437964 | orchestrator |
2026-03-11 00:56:16.437967 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2026-03-11 00:56:16.437971 | orchestrator |
2026-03-11 00:56:16.437975 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-11 00:56:16.437978 | orchestrator | Wednesday 11 March 2026 00:51:55 +0000 (0:00:00.841) 0:06:17.767 *******
2026-03-11 00:56:16.438004 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:16.438009 | orchestrator |
2026-03-11 00:56:16.438049 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-11 00:56:16.438054 | orchestrator | Wednesday 11 March 2026 00:51:56 +0000 (0:00:00.538) 0:06:18.305 *******
2026-03-11 00:56:16.438057 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:16.438062 | orchestrator |
2026-03-11 00:56:16.438065 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-11 00:56:16.438074 | orchestrator | Wednesday 11 March 2026 00:51:57 +0000 (0:00:00.737) 0:06:19.043 *******
2026-03-11 00:56:16.438078 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.438084 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.438091 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.438096 | orchestrator |
2026-03-11 00:56:16.438102 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-11 00:56:16.438107 | orchestrator | Wednesday 11 March 2026 00:51:57 +0000 (0:00:00.339) 0:06:19.382 *******
2026-03-11 00:56:16.438113 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.438119 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.438124 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.438130 | orchestrator |
2026-03-11 00:56:16.438136 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-11 00:56:16.438143 | orchestrator | Wednesday 11 March 2026 00:51:58 +0000 (0:00:00.637) 0:06:20.019 *******
2026-03-11 00:56:16.438149 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.438155 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.438161 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.438167 | orchestrator |
2026-03-11 00:56:16.438174 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-11 00:56:16.438180 | orchestrator | Wednesday 11 March 2026 00:51:58 +0000 (0:00:00.651) 0:06:20.671 *******
2026-03-11 00:56:16.438183 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.438187 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.438191 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.438194 | orchestrator |
2026-03-11 00:56:16.438198 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-11 00:56:16.438202 | orchestrator | Wednesday 11 March 2026 00:51:59 +0000 (0:00:01.004) 0:06:21.675 *******
2026-03-11 00:56:16.438205 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.438209 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.438213 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.438216 | orchestrator |
2026-03-11 00:56:16.438220 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-11 00:56:16.438224 | orchestrator | Wednesday 11 March 2026 00:52:00 +0000 (0:00:00.327) 0:06:22.003 *******
2026-03-11 00:56:16.438228 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.438231 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.438235 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.438239 | orchestrator |
2026-03-11 00:56:16.438242 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-11 00:56:16.438246 | orchestrator | Wednesday 11 March 2026 00:52:00 +0000 (0:00:00.330) 0:06:22.334 *******
2026-03-11 00:56:16.438250 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.438254 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.438257 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.438261 | orchestrator |
2026-03-11 00:56:16.438265 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-11 00:56:16.438273 | orchestrator | Wednesday 11 March 2026 00:52:00 +0000 (0:00:00.299) 0:06:22.633 *******
2026-03-11 00:56:16.438277 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.438281 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.438284 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.438288 | orchestrator |
2026-03-11 00:56:16.438292 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-11 00:56:16.438296 | orchestrator | Wednesday 11 March 2026 00:52:01 +0000 (0:00:01.004) 0:06:23.638 *******
2026-03-11 00:56:16.438302 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.438308 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.438313 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.438319 | orchestrator |
2026-03-11 00:56:16.438324 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-11 00:56:16.438330 | orchestrator | Wednesday 11 March 2026 00:52:02 +0000 (0:00:00.734) 0:06:24.372 *******
2026-03-11 00:56:16.438342 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.438347 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.438354 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.438360 | orchestrator |
2026-03-11 00:56:16.438367 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-11 00:56:16.438373 | orchestrator | Wednesday 11 March 2026 00:52:02 +0000 (0:00:00.301) 0:06:24.673 *******
2026-03-11 00:56:16.438379 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.438385 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.438391 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.438397 | orchestrator |
2026-03-11 00:56:16.438403 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-11 00:56:16.438409 | orchestrator | Wednesday 11 March 2026 00:52:03 +0000 (0:00:00.299) 0:06:24.972 *******
2026-03-11 00:56:16.438415 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.438419 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.438423 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.438426 | orchestrator |
2026-03-11 00:56:16.438431 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-11 00:56:16.438438 | orchestrator | Wednesday 11 March 2026 00:52:03 +0000 (0:00:00.572) 0:06:25.545 *******
2026-03-11 00:56:16.438444 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.438450 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.438456 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.438463 | orchestrator |
2026-03-11 00:56:16.438469 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-11 00:56:16.438476 | orchestrator | Wednesday 11 March 2026 00:52:04 +0000 (0:00:00.324) 0:06:25.869 *******
2026-03-11 00:56:16.438482 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.438488 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.438498 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.438502 | orchestrator |
2026-03-11 00:56:16.438506 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-11 00:56:16.438509 | orchestrator | Wednesday 11 March 2026 00:52:04 +0000 (0:00:00.322) 0:06:26.192 *******
2026-03-11 00:56:16.438513 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.438517 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.438523 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.438530 | orchestrator |
2026-03-11 00:56:16.438536 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-11 00:56:16.438541 | orchestrator | Wednesday 11 March 2026 00:52:04 +0000 (0:00:00.303) 0:06:26.496 *******
2026-03-11 00:56:16.438547 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.438553 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.438559 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.438566 | orchestrator |
2026-03-11 00:56:16.438572 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-11 00:56:16.438578 | orchestrator | Wednesday 11 March 2026 00:52:05 +0000 (0:00:00.587) 0:06:27.084 *******
2026-03-11 00:56:16.438584 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.438590 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.438595 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.438598 | orchestrator |
2026-03-11 00:56:16.438620 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-11 00:56:16.438624 | orchestrator | Wednesday 11 March 2026 00:52:05 +0000 (0:00:00.398) 0:06:27.482 *******
2026-03-11 00:56:16.438627 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.438631 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.438635 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.438638 | orchestrator |
2026-03-11 00:56:16.438642 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-11 00:56:16.438646 | orchestrator | Wednesday 11 March 2026 00:52:06 +0000 (0:00:00.348) 0:06:27.831 *******
2026-03-11 00:56:16.438649 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.438662 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.438666 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.438670 | orchestrator |
2026-03-11 00:56:16.438673 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2026-03-11 00:56:16.438677 | orchestrator | Wednesday 11 March 2026 00:52:06 +0000 (0:00:00.545) 0:06:28.376 *******
2026-03-11 00:56:16.438681 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.438685 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.438688 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.438692 | orchestrator |
2026-03-11 00:56:16.438696 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2026-03-11 00:56:16.438700 | orchestrator | Wednesday 11 March 2026 00:52:07 +0000 (0:00:00.600) 0:06:28.976 *******
2026-03-11 00:56:16.438703 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-11 00:56:16.438707 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-11 00:56:16.438730 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-11 00:56:16.438734 | orchestrator |
2026-03-11 00:56:16.438738 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2026-03-11 00:56:16.438741 | orchestrator | Wednesday 11 March 2026 00:52:07 +0000 (0:00:00.621) 0:06:29.598 *******
2026-03-11 00:56:16.438745 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:16.438749 | orchestrator |
2026-03-11 00:56:16.438758 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2026-03-11 00:56:16.438762 | orchestrator | Wednesday 11 March 2026 00:52:08 +0000 (0:00:00.531) 0:06:30.130 *******
2026-03-11 00:56:16.438765 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.438769 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.438775 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.438781 | orchestrator |
2026-03-11 00:56:16.438787 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2026-03-11 00:56:16.438793 | orchestrator | Wednesday 11 March 2026 00:52:08 +0000 (0:00:00.591) 0:06:30.721 *******
2026-03-11 00:56:16.438799 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.438805 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.438820 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.438826 | orchestrator |
2026-03-11 00:56:16.438833 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2026-03-11 00:56:16.438838 | orchestrator | Wednesday 11 March 2026 00:52:09 +0000 (0:00:00.297) 0:06:31.019 *******
2026-03-11 00:56:16.438842 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.438846 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.438850 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.438853 | orchestrator |
2026-03-11 00:56:16.438857 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2026-03-11 00:56:16.438861 | orchestrator | Wednesday 11 March 2026 00:52:09 +0000 (0:00:00.662) 0:06:31.682 *******
2026-03-11 00:56:16.438864 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.438868 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.438872 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.438875 | orchestrator |
2026-03-11 00:56:16.438879 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2026-03-11 00:56:16.438883 | orchestrator | Wednesday 11 March 2026 00:52:10 +0000 (0:00:00.326) 0:06:32.008 *******
2026-03-11 00:56:16.438887 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-11 00:56:16.438891 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-11 00:56:16.438895 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2026-03-11 00:56:16.438899 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-11 00:56:16.438908 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-11 00:56:16.438921 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-11 00:56:16.438925 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-11 00:56:16.438929 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-11 00:56:16.438932 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-11 00:56:16.438936 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-11 00:56:16.438940 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2026-03-11 00:56:16.438943 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-11 00:56:16.438947 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2026-03-11 00:56:16.438951 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2026-03-11 00:56:16.438954 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2026-03-11 00:56:16.438958 | orchestrator |
2026-03-11 00:56:16.438962 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
2026-03-11 00:56:16.438965 | orchestrator | Wednesday 11 March 2026 00:52:14 +0000 (0:00:04.505) 0:06:36.514 *******
2026-03-11 00:56:16.438969 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.438973 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.438976 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.438980 | orchestrator |
2026-03-11 00:56:16.438984 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2026-03-11 00:56:16.438987 | orchestrator | Wednesday 11 March 2026 00:52:14 +0000 (0:00:00.289) 0:06:36.804 *******
2026-03-11 00:56:16.438991 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:16.438995 | orchestrator |
2026-03-11 00:56:16.438999 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2026-03-11 00:56:16.439002 | orchestrator | Wednesday 11 March 2026 00:52:15 +0000 (0:00:00.493) 0:06:37.297 *******
2026-03-11 00:56:16.439006 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-11 00:56:16.439010 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-11 00:56:16.439014 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2026-03-11 00:56:16.439018 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2026-03-11 00:56:16.439022 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2026-03-11 00:56:16.439026 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2026-03-11 00:56:16.439029 | orchestrator |
2026-03-11 00:56:16.439033 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2026-03-11 00:56:16.439037 | orchestrator | Wednesday 11 March 2026 00:52:16 +0000 (0:00:01.397) 0:06:38.694 *******
2026-03-11 00:56:16.439040 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:56:16.439044 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-11 00:56:16.439050 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-11 00:56:16.439056 | orchestrator |
2026-03-11 00:56:16.439073 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2026-03-11 00:56:16.439079 | orchestrator | Wednesday 11 March 2026 00:52:19 +0000 (0:00:02.465) 0:06:41.160 *******
2026-03-11 00:56:16.439085 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-11 00:56:16.439092 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-11 00:56:16.439097 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:16.439103 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-11 00:56:16.439114 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-11 00:56:16.439119 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:16.439125 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-11 00:56:16.439131 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-11 00:56:16.439136 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:16.439142 | orchestrator |
2026-03-11 00:56:16.439148 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2026-03-11 00:56:16.439153 | orchestrator | Wednesday 11 March 2026 00:52:20 +0000 (0:00:01.409) 0:06:42.569 *******
2026-03-11 00:56:16.439159 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-11 00:56:16.439165 | orchestrator |
2026-03-11 00:56:16.439171 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2026-03-11 00:56:16.439177 | orchestrator | Wednesday 11 March 2026 00:52:22 +0000 (0:00:01.917) 0:06:44.487 *******
2026-03-11 00:56:16.439183 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:16.439189 | orchestrator |
2026-03-11 00:56:16.439194 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2026-03-11 00:56:16.439200 | orchestrator | Wednesday 11 March 2026 00:52:23 +0000 (0:00:00.536) 0:06:45.024 *******
2026-03-11 00:56:16.439207 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-c12a1925-beca-5a04-a9cd-b492500b7146', 'data_vg': 'ceph-c12a1925-beca-5a04-a9cd-b492500b7146'})
2026-03-11 00:56:16.439215 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2fb06152-6c58-5f9b-bb14-a51d715c3982', 'data_vg': 'ceph-2fb06152-6c58-5f9b-bb14-a51d715c3982'})
2026-03-11 00:56:16.439226 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-71564836-6f16-509c-9c2d-06150302b625', 'data_vg': 'ceph-71564836-6f16-509c-9c2d-06150302b625'})
2026-03-11 00:56:16.439232 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-20faa7ec-42ec-56bc-96e8-0b7388032f08', 'data_vg': 'ceph-20faa7ec-42ec-56bc-96e8-0b7388032f08'})
2026-03-11 00:56:16.439238 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2e0b0e2c-c482-530c-847f-054ffec93e8e', 'data_vg': 'ceph-2e0b0e2c-c482-530c-847f-054ffec93e8e'})
2026-03-11 00:56:16.439245 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-75b18a9f-434b-5575-8ed7-e1e8868eceb5', 'data_vg': 'ceph-75b18a9f-434b-5575-8ed7-e1e8868eceb5'})
2026-03-11 00:56:16.439251 | orchestrator |
2026-03-11 00:56:16.439257 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2026-03-11 00:56:16.439264 | orchestrator | Wednesday 11 March 2026 00:53:03 +0000 (0:00:39.916) 0:07:24.940 *******
2026-03-11 00:56:16.439270 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.439275 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.439279 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.439283 | orchestrator |
2026-03-11 00:56:16.439287 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2026-03-11 00:56:16.439290 | orchestrator | Wednesday 11 March 2026 00:53:03 +0000 (0:00:00.335) 0:07:25.276 *******
2026-03-11 00:56:16.439294 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:16.439298 | orchestrator |
2026-03-11 00:56:16.439302 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2026-03-11 00:56:16.439306 | orchestrator | Wednesday 11 March 2026 00:53:03 +0000 (0:00:00.509) 0:07:25.786 *******
2026-03-11 00:56:16.439309 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.439313 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.439317 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.439320 | orchestrator |
2026-03-11 00:56:16.439324 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2026-03-11 00:56:16.439328 | orchestrator | Wednesday 11 March 2026 00:53:04 +0000 (0:00:00.973) 0:07:26.759 *******
2026-03-11 00:56:16.439332 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.439340 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.439344 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.439347 | orchestrator |
2026-03-11 00:56:16.439351 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2026-03-11 00:56:16.439355 | orchestrator | Wednesday 11 March 2026 00:53:07 +0000 (0:00:02.688) 0:07:29.447 *******
2026-03-11 00:56:16.439359 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:16.439362 | orchestrator |
2026-03-11 00:56:16.439366 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2026-03-11 00:56:16.439370 | orchestrator | Wednesday 11 March 2026 00:53:08 +0000 (0:00:00.586) 0:07:30.034 *******
2026-03-11 00:56:16.439373 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:16.439377 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:16.439381 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:16.439384 | orchestrator |
2026-03-11 00:56:16.439388 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2026-03-11 00:56:16.439392 | orchestrator | Wednesday 11 March 2026 00:53:09 +0000 (0:00:01.558) 0:07:31.593 *******
2026-03-11 00:56:16.439395 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:16.439403 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:16.439407 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:16.439411 | orchestrator |
2026-03-11 00:56:16.439414 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2026-03-11 00:56:16.439418 | orchestrator | Wednesday 11 March 2026 00:53:10 +0000 (0:00:01.203) 0:07:32.796 *******
2026-03-11 00:56:16.439422 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:16.439425 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:16.439429 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:16.439433 | orchestrator |
2026-03-11 00:56:16.439436 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2026-03-11 00:56:16.439440 | orchestrator | Wednesday 11 March 2026 00:53:12 +0000 (0:00:01.805) 0:07:34.601 *******
2026-03-11 00:56:16.439444 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.439447 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.439451 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.439455 | orchestrator |
2026-03-11 00:56:16.439458 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2026-03-11 00:56:16.439462 | orchestrator | Wednesday 11 March 2026 00:53:13 +0000 (0:00:00.308) 0:07:34.910 *******
2026-03-11 00:56:16.439466 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.439469 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.439473 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.439477 | orchestrator |
2026-03-11 00:56:16.439480 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2026-03-11 00:56:16.439484 | orchestrator | Wednesday 11 March 2026 00:53:13 +0000 (0:00:00.572) 0:07:35.483 *******
2026-03-11 00:56:16.439488 | orchestrator | ok: [testbed-node-3] => (item=5)
2026-03-11 00:56:16.439491 | orchestrator | ok: [testbed-node-4] => (item=1)
2026-03-11 00:56:16.439495 | orchestrator | ok: [testbed-node-5] => (item=2)
2026-03-11 00:56:16.439499 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-11 00:56:16.439502 | orchestrator | ok: [testbed-node-4] => (item=3)
2026-03-11 00:56:16.439506 | orchestrator | ok: [testbed-node-5] => (item=4)
2026-03-11 00:56:16.439510 | orchestrator |
2026-03-11 00:56:16.439513 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2026-03-11 00:56:16.439517 | orchestrator | Wednesday 11 March 2026 00:53:14 +0000 (0:00:01.210) 0:07:36.694 *******
2026-03-11 00:56:16.439521 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-03-11 00:56:16.439525 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-11 00:56:16.439528 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-11 00:56:16.439532 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-11 00:56:16.439536 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-03-11 00:56:16.439549 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-03-11 00:56:16.439553 | orchestrator |
2026-03-11 00:56:16.439557 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2026-03-11 00:56:16.439561 | orchestrator | Wednesday 11 March 2026 00:53:17 +0000 (0:00:02.241) 0:07:38.935 *******
2026-03-11 00:56:16.439564 | orchestrator | changed: [testbed-node-3] => (item=5)
2026-03-11 00:56:16.439568 | orchestrator | changed: [testbed-node-4] => (item=1)
2026-03-11 00:56:16.439572 | orchestrator | changed: [testbed-node-5] => (item=2)
2026-03-11 00:56:16.439575 | orchestrator | changed: [testbed-node-4] => (item=3)
2026-03-11 00:56:16.439579 | orchestrator | changed: [testbed-node-3] => (item=0)
2026-03-11 00:56:16.439583 | orchestrator | changed: [testbed-node-5] => (item=4)
2026-03-11 00:56:16.439586 | orchestrator |
2026-03-11 00:56:16.439590 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2026-03-11 00:56:16.439594 | orchestrator | Wednesday 11 March 2026 00:53:20 +0000 (0:00:03.782) 0:07:42.718 *******
2026-03-11 00:56:16.439600 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.439606 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.439612 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2026-03-11 00:56:16.439618 | orchestrator |
2026-03-11 00:56:16.439624 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2026-03-11 00:56:16.439629 | orchestrator | Wednesday 11 March 2026 00:53:23 +0000 (0:00:02.521) 0:07:45.240 *******
2026-03-11 00:56:16.439635 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.439641 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.439647 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
2026-03-11 00:56:16.439653 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] 2026-03-11 00:56:16.439660 | orchestrator | 2026-03-11 00:56:16.439666 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] ************************************** 2026-03-11 00:56:16.439672 | orchestrator | Wednesday 11 March 2026 00:53:35 +0000 (0:00:12.479) 0:07:57.720 ******* 2026-03-11 00:56:16.439678 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.439685 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.439691 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.439697 | orchestrator | 2026-03-11 00:56:16.439704 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-11 00:56:16.439752 | orchestrator | Wednesday 11 March 2026 00:53:36 +0000 (0:00:00.919) 0:07:58.639 ******* 2026-03-11 00:56:16.439759 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.439764 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.439770 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.439776 | orchestrator | 2026-03-11 00:56:16.439782 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] ********************************** 2026-03-11 00:56:16.439788 | orchestrator | Wednesday 11 March 2026 00:53:37 +0000 (0:00:00.302) 0:07:58.942 ******* 2026-03-11 00:56:16.439794 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:56:16.439800 | orchestrator | 2026-03-11 00:56:16.439807 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] ********************** 2026-03-11 00:56:16.439813 | orchestrator | Wednesday 11 March 2026 00:53:37 +0000 (0:00:00.460) 0:07:59.403 ******* 2026-03-11 00:56:16.439819 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-11 00:56:16.439826 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-4)  2026-03-11 00:56:16.439837 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-11 00:56:16.439844 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.439849 | orchestrator | 2026-03-11 00:56:16.439853 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ******** 2026-03-11 00:56:16.439856 | orchestrator | Wednesday 11 March 2026 00:53:38 +0000 (0:00:00.925) 0:08:00.328 ******* 2026-03-11 00:56:16.439866 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.439870 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.439876 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.439882 | orchestrator | 2026-03-11 00:56:16.439888 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] ******************************* 2026-03-11 00:56:16.439894 | orchestrator | Wednesday 11 March 2026 00:53:38 +0000 (0:00:00.330) 0:08:00.659 ******* 2026-03-11 00:56:16.439900 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.439905 | orchestrator | 2026-03-11 00:56:16.439912 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] *********************** 2026-03-11 00:56:16.439918 | orchestrator | Wednesday 11 March 2026 00:53:39 +0000 (0:00:00.201) 0:08:00.861 ******* 2026-03-11 00:56:16.439924 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.439931 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.439936 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.439942 | orchestrator | 2026-03-11 00:56:16.439950 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] ********************************* 2026-03-11 00:56:16.439954 | orchestrator | Wednesday 11 March 2026 00:53:39 +0000 (0:00:00.310) 0:08:01.171 ******* 2026-03-11 00:56:16.439958 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.439961 | orchestrator | 2026-03-11 00:56:16.439965 | orchestrator | RUNNING 
HANDLER [ceph-handler : Get balancer module status] ******************** 2026-03-11 00:56:16.439969 | orchestrator | Wednesday 11 March 2026 00:53:39 +0000 (0:00:00.236) 0:08:01.407 ******* 2026-03-11 00:56:16.439972 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.439976 | orchestrator | 2026-03-11 00:56:16.439980 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] ************** 2026-03-11 00:56:16.439983 | orchestrator | Wednesday 11 March 2026 00:53:39 +0000 (0:00:00.209) 0:08:01.617 ******* 2026-03-11 00:56:16.439987 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.439991 | orchestrator | 2026-03-11 00:56:16.439994 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ****************************** 2026-03-11 00:56:16.439998 | orchestrator | Wednesday 11 March 2026 00:53:39 +0000 (0:00:00.118) 0:08:01.735 ******* 2026-03-11 00:56:16.440002 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.440005 | orchestrator | 2026-03-11 00:56:16.440015 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] ***************** 2026-03-11 00:56:16.440019 | orchestrator | Wednesday 11 March 2026 00:53:40 +0000 (0:00:00.214) 0:08:01.950 ******* 2026-03-11 00:56:16.440022 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.440026 | orchestrator | 2026-03-11 00:56:16.440030 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] ******************* 2026-03-11 00:56:16.440033 | orchestrator | Wednesday 11 March 2026 00:53:40 +0000 (0:00:00.769) 0:08:02.719 ******* 2026-03-11 00:56:16.440037 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-11 00:56:16.440041 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-11 00:56:16.440044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-11 00:56:16.440048 | orchestrator | skipping: [testbed-node-3] 2026-03-11 
00:56:16.440052 | orchestrator | 2026-03-11 00:56:16.440055 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] ********* 2026-03-11 00:56:16.440059 | orchestrator | Wednesday 11 March 2026 00:53:41 +0000 (0:00:00.394) 0:08:03.113 ******* 2026-03-11 00:56:16.440063 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.440067 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.440070 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.440074 | orchestrator | 2026-03-11 00:56:16.440077 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] *************** 2026-03-11 00:56:16.440081 | orchestrator | Wednesday 11 March 2026 00:53:41 +0000 (0:00:00.299) 0:08:03.413 ******* 2026-03-11 00:56:16.440085 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.440088 | orchestrator | 2026-03-11 00:56:16.440092 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] **************************** 2026-03-11 00:56:16.440096 | orchestrator | Wednesday 11 March 2026 00:53:41 +0000 (0:00:00.232) 0:08:03.646 ******* 2026-03-11 00:56:16.440104 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.440108 | orchestrator | 2026-03-11 00:56:16.440111 | orchestrator | PLAY [Apply role ceph-crash] *************************************************** 2026-03-11 00:56:16.440115 | orchestrator | 2026-03-11 00:56:16.440119 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-11 00:56:16.440122 | orchestrator | Wednesday 11 March 2026 00:53:42 +0000 (0:00:00.642) 0:08:04.288 ******* 2026-03-11 00:56:16.440126 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:16.440132 | orchestrator | 2026-03-11 00:56:16.440136 | orchestrator | TASK [ceph-handler : Include 
check_running_containers.yml] ********************* 2026-03-11 00:56:16.440140 | orchestrator | Wednesday 11 March 2026 00:53:43 +0000 (0:00:01.243) 0:08:05.532 ******* 2026-03-11 00:56:16.440144 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:16.440147 | orchestrator | 2026-03-11 00:56:16.440151 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-11 00:56:16.440155 | orchestrator | Wednesday 11 March 2026 00:53:44 +0000 (0:00:01.149) 0:08:06.681 ******* 2026-03-11 00:56:16.440158 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.440162 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.440166 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.440169 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.440173 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.440177 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.440181 | orchestrator | 2026-03-11 00:56:16.440188 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2026-03-11 00:56:16.440192 | orchestrator | Wednesday 11 March 2026 00:53:46 +0000 (0:00:01.247) 0:08:07.929 ******* 2026-03-11 00:56:16.440196 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.440199 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.440203 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.440207 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.440210 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.440214 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.440218 | orchestrator | 2026-03-11 00:56:16.440222 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2026-03-11 00:56:16.440225 | orchestrator | Wednesday 11 
March 2026 00:53:46 +0000 (0:00:00.685) 0:08:08.615 ******* 2026-03-11 00:56:16.440229 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.440233 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.440236 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.440240 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.440244 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.440247 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.440251 | orchestrator | 2026-03-11 00:56:16.440254 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2026-03-11 00:56:16.440258 | orchestrator | Wednesday 11 March 2026 00:53:47 +0000 (0:00:00.936) 0:08:09.552 ******* 2026-03-11 00:56:16.440262 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.440265 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.440269 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.440273 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.440276 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.440280 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.440284 | orchestrator | 2026-03-11 00:56:16.440287 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2026-03-11 00:56:16.440291 | orchestrator | Wednesday 11 March 2026 00:53:48 +0000 (0:00:00.670) 0:08:10.223 ******* 2026-03-11 00:56:16.440295 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.440298 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.440306 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.440310 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.440314 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.440317 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.440321 | orchestrator | 2026-03-11 00:56:16.440325 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] 
************************* 2026-03-11 00:56:16.440328 | orchestrator | Wednesday 11 March 2026 00:53:49 +0000 (0:00:01.059) 0:08:11.282 ******* 2026-03-11 00:56:16.440332 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.440336 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.440342 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.440346 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.440350 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.440354 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.440357 | orchestrator | 2026-03-11 00:56:16.440361 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2026-03-11 00:56:16.440365 | orchestrator | Wednesday 11 March 2026 00:53:49 +0000 (0:00:00.488) 0:08:11.770 ******* 2026-03-11 00:56:16.440368 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.440372 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.440376 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.440379 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.440383 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.440387 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.440390 | orchestrator | 2026-03-11 00:56:16.440394 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2026-03-11 00:56:16.440398 | orchestrator | Wednesday 11 March 2026 00:53:50 +0000 (0:00:00.716) 0:08:12.487 ******* 2026-03-11 00:56:16.440402 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.440405 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.440409 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.440413 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.440416 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.440420 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.440424 | orchestrator 
| 2026-03-11 00:56:16.440427 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2026-03-11 00:56:16.440431 | orchestrator | Wednesday 11 March 2026 00:53:51 +0000 (0:00:01.051) 0:08:13.539 ******* 2026-03-11 00:56:16.440435 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.440439 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.440442 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.440446 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.440450 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.440453 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.440457 | orchestrator | 2026-03-11 00:56:16.440461 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2026-03-11 00:56:16.440464 | orchestrator | Wednesday 11 March 2026 00:53:52 +0000 (0:00:01.196) 0:08:14.735 ******* 2026-03-11 00:56:16.440468 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.440472 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.440476 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.440479 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.440483 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.440486 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.440490 | orchestrator | 2026-03-11 00:56:16.440494 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2026-03-11 00:56:16.440498 | orchestrator | Wednesday 11 March 2026 00:53:53 +0000 (0:00:00.507) 0:08:15.243 ******* 2026-03-11 00:56:16.440502 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.440507 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.440513 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.440523 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.440530 | orchestrator | ok: [testbed-node-1] 2026-03-11 
00:56:16.440537 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.440547 | orchestrator | 2026-03-11 00:56:16.440554 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2026-03-11 00:56:16.440560 | orchestrator | Wednesday 11 March 2026 00:53:54 +0000 (0:00:00.667) 0:08:15.910 ******* 2026-03-11 00:56:16.440565 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.440571 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.440576 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.440582 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.440588 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.440594 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.440600 | orchestrator | 2026-03-11 00:56:16.440610 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2026-03-11 00:56:16.440616 | orchestrator | Wednesday 11 March 2026 00:53:54 +0000 (0:00:00.526) 0:08:16.436 ******* 2026-03-11 00:56:16.440623 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.440629 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.440634 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.440640 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.440646 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.440652 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.440657 | orchestrator | 2026-03-11 00:56:16.440663 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2026-03-11 00:56:16.440670 | orchestrator | Wednesday 11 March 2026 00:53:55 +0000 (0:00:00.678) 0:08:17.114 ******* 2026-03-11 00:56:16.440676 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.440682 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.440688 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.440693 | orchestrator | skipping: [testbed-node-0] 
2026-03-11 00:56:16.440699 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.440705 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.440731 | orchestrator | 2026-03-11 00:56:16.440736 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2026-03-11 00:56:16.440740 | orchestrator | Wednesday 11 March 2026 00:53:55 +0000 (0:00:00.540) 0:08:17.654 ******* 2026-03-11 00:56:16.440744 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.440747 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.440751 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.440755 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.440758 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.440762 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.440766 | orchestrator | 2026-03-11 00:56:16.440769 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2026-03-11 00:56:16.440773 | orchestrator | Wednesday 11 March 2026 00:53:56 +0000 (0:00:00.634) 0:08:18.289 ******* 2026-03-11 00:56:16.440777 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.440780 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.440784 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.440788 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:56:16.440791 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:56:16.440795 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:56:16.440799 | orchestrator | 2026-03-11 00:56:16.440802 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2026-03-11 00:56:16.440806 | orchestrator | Wednesday 11 March 2026 00:53:56 +0000 (0:00:00.490) 0:08:18.780 ******* 2026-03-11 00:56:16.440810 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.440822 | orchestrator | skipping: [testbed-node-4] 
2026-03-11 00:56:16.440830 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.440838 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.440844 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.440850 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.440855 | orchestrator | 2026-03-11 00:56:16.440862 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2026-03-11 00:56:16.440868 | orchestrator | Wednesday 11 March 2026 00:53:57 +0000 (0:00:00.750) 0:08:19.530 ******* 2026-03-11 00:56:16.440873 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.440886 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.440892 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.440898 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.440903 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.440909 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.440915 | orchestrator | 2026-03-11 00:56:16.440921 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2026-03-11 00:56:16.440926 | orchestrator | Wednesday 11 March 2026 00:53:58 +0000 (0:00:00.557) 0:08:20.087 ******* 2026-03-11 00:56:16.440931 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.440937 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.440943 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.440948 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.440953 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.440959 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.440964 | orchestrator | 2026-03-11 00:56:16.440970 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ******************************** 2026-03-11 00:56:16.440976 | orchestrator | Wednesday 11 March 2026 00:53:59 +0000 (0:00:01.476) 0:08:21.564 ******* 2026-03-11 00:56:16.440982 | orchestrator | changed: [testbed-node-3 -> 
testbed-node-0(192.168.16.10)] 2026-03-11 00:56:16.440988 | orchestrator | 2026-03-11 00:56:16.440995 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2026-03-11 00:56:16.441001 | orchestrator | Wednesday 11 March 2026 00:54:03 +0000 (0:00:03.510) 0:08:25.074 ******* 2026-03-11 00:56:16.441007 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-11 00:56:16.441013 | orchestrator | 2026-03-11 00:56:16.441018 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2026-03-11 00:56:16.441022 | orchestrator | Wednesday 11 March 2026 00:54:05 +0000 (0:00:01.903) 0:08:26.977 ******* 2026-03-11 00:56:16.441026 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:16.441029 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:16.441033 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:16.441037 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.441040 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:16.441044 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:16.441048 | orchestrator | 2026-03-11 00:56:16.441051 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2026-03-11 00:56:16.441055 | orchestrator | Wednesday 11 March 2026 00:54:07 +0000 (0:00:01.925) 0:08:28.903 ******* 2026-03-11 00:56:16.441059 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:16.441062 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:16.441066 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:16.441070 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:16.441074 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:16.441077 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:16.441081 | orchestrator | 2026-03-11 00:56:16.441085 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2026-03-11 00:56:16.441088 | orchestrator | Wednesday 11 March 2026 00:54:08 +0000 (0:00:01.036) 0:08:29.940 ******* 2026-03-11 00:56:16.441097 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:16.441102 | orchestrator | 2026-03-11 00:56:16.441106 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2026-03-11 00:56:16.441110 | orchestrator | Wednesday 11 March 2026 00:54:09 +0000 (0:00:01.206) 0:08:31.146 ******* 2026-03-11 00:56:16.441113 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:16.441117 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:16.441121 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:16.441125 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:16.441128 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:16.441132 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:16.441136 | orchestrator | 2026-03-11 00:56:16.441139 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2026-03-11 00:56:16.441147 | orchestrator | Wednesday 11 March 2026 00:54:11 +0000 (0:00:01.762) 0:08:32.909 ******* 2026-03-11 00:56:16.441151 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:16.441155 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:16.441158 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:16.441162 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:16.441166 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:16.441169 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:16.441173 | orchestrator | 2026-03-11 00:56:16.441177 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2026-03-11 00:56:16.441181 | orchestrator | Wednesday 11 March 2026 00:54:14 +0000 (0:00:03.222) 
0:08:36.131 ******* 2026-03-11 00:56:16.441184 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:56:16.441188 | orchestrator | 2026-03-11 00:56:16.441192 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2026-03-11 00:56:16.441195 | orchestrator | Wednesday 11 March 2026 00:54:15 +0000 (0:00:01.273) 0:08:37.404 ******* 2026-03-11 00:56:16.441199 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.441203 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.441206 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.441210 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.441214 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.441217 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.441221 | orchestrator | 2026-03-11 00:56:16.441225 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2026-03-11 00:56:16.441229 | orchestrator | Wednesday 11 March 2026 00:54:16 +0000 (0:00:00.817) 0:08:38.222 ******* 2026-03-11 00:56:16.441232 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:16.441240 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:16.441243 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:16.441247 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:56:16.441251 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:56:16.441254 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:56:16.441258 | orchestrator | 2026-03-11 00:56:16.441262 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2026-03-11 00:56:16.441266 | orchestrator | Wednesday 11 March 2026 00:54:18 +0000 (0:00:02.086) 0:08:40.309 ******* 2026-03-11 00:56:16.441269 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.441273 | orchestrator 
| ok: [testbed-node-4] 2026-03-11 00:56:16.441277 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.441281 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:56:16.441284 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:56:16.441288 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:56:16.441292 | orchestrator | 2026-03-11 00:56:16.441295 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2026-03-11 00:56:16.441299 | orchestrator | 2026-03-11 00:56:16.441303 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2026-03-11 00:56:16.441307 | orchestrator | Wednesday 11 March 2026 00:54:19 +0000 (0:00:01.071) 0:08:41.380 ******* 2026-03-11 00:56:16.441314 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:56:16.441323 | orchestrator | 2026-03-11 00:56:16.441330 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2026-03-11 00:56:16.441336 | orchestrator | Wednesday 11 March 2026 00:54:20 +0000 (0:00:00.519) 0:08:41.899 ******* 2026-03-11 00:56:16.441342 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:56:16.441347 | orchestrator | 2026-03-11 00:56:16.441353 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2026-03-11 00:56:16.441359 | orchestrator | Wednesday 11 March 2026 00:54:20 +0000 (0:00:00.771) 0:08:42.670 ******* 2026-03-11 00:56:16.441389 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.441396 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.441402 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.441409 | orchestrator | 2026-03-11 00:56:16.441414 | orchestrator | TASK [ceph-handler : Check for an osd container] 
*******************************
2026-03-11 00:56:16.441421 | orchestrator | Wednesday 11 March 2026 00:54:21 +0000 (0:00:00.315) 0:08:42.986 *******
2026-03-11 00:56:16.441427 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.441433 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.441440 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.441444 | orchestrator |
2026-03-11 00:56:16.441448 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-11 00:56:16.441452 | orchestrator | Wednesday 11 March 2026 00:54:21 +0000 (0:00:00.690) 0:08:43.677 *******
2026-03-11 00:56:16.441456 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.441460 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.441463 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.441467 | orchestrator |
2026-03-11 00:56:16.441471 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-11 00:56:16.441474 | orchestrator | Wednesday 11 March 2026 00:54:22 +0000 (0:00:01.089) 0:08:44.766 *******
2026-03-11 00:56:16.441478 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.441482 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.441485 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.441489 | orchestrator |
2026-03-11 00:56:16.441492 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-11 00:56:16.441500 | orchestrator | Wednesday 11 March 2026 00:54:23 +0000 (0:00:00.790) 0:08:45.557 *******
2026-03-11 00:56:16.441504 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.441507 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.441511 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.441515 | orchestrator |
2026-03-11 00:56:16.441518 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-11 00:56:16.441522 | orchestrator | Wednesday 11 March 2026 00:54:24 +0000 (0:00:00.302) 0:08:45.859 *******
2026-03-11 00:56:16.441526 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.441530 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.441534 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.441538 | orchestrator |
2026-03-11 00:56:16.441544 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-11 00:56:16.441549 | orchestrator | Wednesday 11 March 2026 00:54:24 +0000 (0:00:00.302) 0:08:46.162 *******
2026-03-11 00:56:16.441555 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.441561 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.441566 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.441572 | orchestrator |
2026-03-11 00:56:16.441578 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-11 00:56:16.441584 | orchestrator | Wednesday 11 March 2026 00:54:24 +0000 (0:00:00.608) 0:08:46.770 *******
2026-03-11 00:56:16.441589 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.441595 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.441600 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.441607 | orchestrator |
2026-03-11 00:56:16.441613 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-11 00:56:16.441620 | orchestrator | Wednesday 11 March 2026 00:54:25 +0000 (0:00:00.767) 0:08:47.537 *******
2026-03-11 00:56:16.441627 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.441634 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.441640 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.441647 | orchestrator |
2026-03-11 00:56:16.441652 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-11 00:56:16.441658 | orchestrator | Wednesday 11 March 2026 00:54:26 +0000 (0:00:00.716) 0:08:48.254 *******
2026-03-11 00:56:16.441664 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.441670 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.441682 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.441688 | orchestrator |
2026-03-11 00:56:16.441693 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-11 00:56:16.441699 | orchestrator | Wednesday 11 March 2026 00:54:26 +0000 (0:00:00.292) 0:08:48.546 *******
2026-03-11 00:56:16.441704 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.441732 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.441738 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.441744 | orchestrator |
2026-03-11 00:56:16.441750 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-11 00:56:16.441756 | orchestrator | Wednesday 11 March 2026 00:54:27 +0000 (0:00:00.584) 0:08:49.130 *******
2026-03-11 00:56:16.441762 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.441769 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.441774 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.441780 | orchestrator |
2026-03-11 00:56:16.441786 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-11 00:56:16.441792 | orchestrator | Wednesday 11 March 2026 00:54:27 +0000 (0:00:00.338) 0:08:49.469 *******
2026-03-11 00:56:16.441799 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.441805 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.441811 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.441817 | orchestrator |
2026-03-11 00:56:16.441823 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-11 00:56:16.441828 | orchestrator | Wednesday 11 March 2026 00:54:28 +0000 (0:00:00.343) 0:08:49.813 *******
2026-03-11 00:56:16.441834 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.441840 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.441845 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.441851 | orchestrator |
2026-03-11 00:56:16.441856 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-11 00:56:16.441862 | orchestrator | Wednesday 11 March 2026 00:54:28 +0000 (0:00:00.322) 0:08:50.135 *******
2026-03-11 00:56:16.441868 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.441874 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.441879 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.441885 | orchestrator |
2026-03-11 00:56:16.441891 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-11 00:56:16.441898 | orchestrator | Wednesday 11 March 2026 00:54:28 +0000 (0:00:00.582) 0:08:50.718 *******
2026-03-11 00:56:16.441903 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.441909 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.441914 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.441920 | orchestrator |
2026-03-11 00:56:16.441925 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-11 00:56:16.441931 | orchestrator | Wednesday 11 March 2026 00:54:29 +0000 (0:00:00.318) 0:08:51.037 *******
2026-03-11 00:56:16.441936 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.441941 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.441947 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.441954 | orchestrator |
2026-03-11 00:56:16.441960 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-11 00:56:16.441966 | orchestrator | Wednesday 11 March 2026 00:54:29 +0000 (0:00:00.305) 0:08:51.343 *******
2026-03-11 00:56:16.441972 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.441978 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.441984 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.441989 | orchestrator |
2026-03-11 00:56:16.441995 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-11 00:56:16.442002 | orchestrator | Wednesday 11 March 2026 00:54:29 +0000 (0:00:00.384) 0:08:51.728 *******
2026-03-11 00:56:16.442009 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.442064 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.442071 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.442078 | orchestrator |
2026-03-11 00:56:16.442092 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] ***************************
2026-03-11 00:56:16.442098 | orchestrator | Wednesday 11 March 2026 00:54:30 +0000 (0:00:00.876) 0:08:52.605 *******
2026-03-11 00:56:16.442116 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.442123 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.442129 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3
2026-03-11 00:56:16.442136 | orchestrator |
2026-03-11 00:56:16.442142 | orchestrator | TASK [ceph-facts : Get current default crush rule details] *********************
2026-03-11 00:56:16.442148 | orchestrator | Wednesday 11 March 2026 00:54:31 +0000 (0:00:00.356) 0:08:52.961 *******
2026-03-11 00:56:16.442154 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-11 00:56:16.442160 | orchestrator |
2026-03-11 00:56:16.442165 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************
2026-03-11 00:56:16.442171 | orchestrator | Wednesday 11 March 2026 00:54:33 +0000 (0:00:02.073) 0:08:55.034 *******
2026-03-11 00:56:16.442178 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})
2026-03-11 00:56:16.442186 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.442192 | orchestrator |
2026-03-11 00:56:16.442198 | orchestrator | TASK [ceph-mds : Create filesystem pools] **************************************
2026-03-11 00:56:16.442204 | orchestrator | Wednesday 11 March 2026 00:54:33 +0000 (0:00:00.183) 0:08:55.217 *******
2026-03-11 00:56:16.442212 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-11 00:56:16.442224 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-11 00:56:16.442230 | orchestrator |
2026-03-11 00:56:16.442236 | orchestrator | TASK [ceph-mds : Create ceph filesystem] ***************************************
2026-03-11 00:56:16.442241 | orchestrator | Wednesday 11 March 2026 00:54:42 +0000 (0:00:08.733) 0:09:03.951 *******
2026-03-11 00:56:16.442255 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2026-03-11 00:56:16.442261 | orchestrator |
2026-03-11 00:56:16.442266 | orchestrator | TASK [ceph-mds : Include common.yml] *******************************************
2026-03-11 00:56:16.442272 | orchestrator | Wednesday 11 March 2026 00:54:45 +0000 (0:00:03.587) 0:09:07.539 *******
2026-03-11 00:56:16.442278 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:16.442285 | orchestrator |
2026-03-11 00:56:16.442291 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] *********************
2026-03-11 00:56:16.442297 | orchestrator | Wednesday 11 March 2026 00:54:46 +0000 (0:00:00.467) 0:09:08.006 *******
2026-03-11 00:56:16.442303 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-11 00:56:16.442308 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-11 00:56:16.442315 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/)
2026-03-11 00:56:16.442321 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3)
2026-03-11 00:56:16.442327 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4)
2026-03-11 00:56:16.442333 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5)
2026-03-11 00:56:16.442339 | orchestrator |
2026-03-11 00:56:16.442345 | orchestrator | TASK [ceph-mds : Get keys from monitors] ***************************************
2026-03-11 00:56:16.442352 | orchestrator | Wednesday 11 March 2026 00:54:47 +0000 (0:00:00.872) 0:09:08.878 *******
2026-03-11 00:56:16.442366 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:56:16.442374 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-11 00:56:16.442381 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-11 00:56:16.442388 | orchestrator |
2026-03-11 00:56:16.442396 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] ***********************************
2026-03-11 00:56:16.442403 | orchestrator | Wednesday 11 March 2026 00:54:49 +0000 (0:00:01.986) 0:09:10.865 *******
2026-03-11 00:56:16.442411 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-11 00:56:16.442418 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-11 00:56:16.442425 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:16.442432 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-11 00:56:16.442439 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-11 00:56:16.442445 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:16.442451 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-11 00:56:16.442457 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-11 00:56:16.442464 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:16.442471 | orchestrator |
2026-03-11 00:56:16.442478 | orchestrator | TASK [ceph-mds : Create mds keyring] *******************************************
2026-03-11 00:56:16.442485 | orchestrator | Wednesday 11 March 2026 00:54:50 +0000 (0:00:01.293) 0:09:12.158 *******
2026-03-11 00:56:16.442492 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:16.442499 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:16.442506 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:16.442513 | orchestrator |
2026-03-11 00:56:16.442520 | orchestrator | TASK [ceph-mds : Non_containerized.yml] ****************************************
2026-03-11 00:56:16.442527 | orchestrator | Wednesday 11 March 2026 00:54:52 +0000 (0:00:02.433) 0:09:14.592 *******
2026-03-11 00:56:16.442539 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.442547 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.442554 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.442560 | orchestrator |
2026-03-11 00:56:16.442566 | orchestrator | TASK [ceph-mds : Containerized.yml] ********************************************
2026-03-11 00:56:16.442572 | orchestrator | Wednesday 11 March 2026 00:54:53 +0000 (0:00:00.321) 0:09:14.914 *******
2026-03-11 00:56:16.442578 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:16.442585 | orchestrator |
2026-03-11 00:56:16.442591 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************
2026-03-11 00:56:16.442598 | orchestrator | Wednesday 11 March 2026 00:54:53 +0000 (0:00:00.850) 0:09:15.764 *******
2026-03-11 00:56:16.442605 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:16.442612 | orchestrator |
2026-03-11 00:56:16.442619 | orchestrator | TASK [ceph-mds : Generate systemd unit file] ***********************************
2026-03-11 00:56:16.442626 | orchestrator | Wednesday 11 March 2026 00:54:54 +0000 (0:00:00.472) 0:09:16.236 *******
2026-03-11 00:56:16.442633 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:16.442639 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:16.442645 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:16.442652 | orchestrator |
2026-03-11 00:56:16.442658 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************
2026-03-11 00:56:16.442665 | orchestrator | Wednesday 11 March 2026 00:54:55 +0000 (0:00:01.251) 0:09:17.488 *******
2026-03-11 00:56:16.442672 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:16.442679 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:16.442686 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:16.442692 | orchestrator |
2026-03-11 00:56:16.442698 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] ***************************************
2026-03-11 00:56:16.442706 | orchestrator | Wednesday 11 March 2026 00:54:57 +0000 (0:00:01.829) 0:09:18.853 *******
2026-03-11 00:56:16.442743 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:16.442750 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:16.442757 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:16.442763 | orchestrator |
2026-03-11 00:56:16.442770 | orchestrator | TASK [ceph-mds : Systemd start mds container] **********************************
2026-03-11 00:56:16.442778 | orchestrator | Wednesday 11 March 2026 00:54:58 +0000 (0:00:01.829) 0:09:20.683 *******
2026-03-11 00:56:16.442784 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:16.442800 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:16.442807 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:16.442814 | orchestrator |
2026-03-11 00:56:16.442820 | orchestrator | TASK [ceph-mds : Wait for mds socket to exist] *********************************
2026-03-11 00:56:16.442827 | orchestrator | Wednesday 11 March 2026 00:55:00 +0000 (0:00:01.788) 0:09:22.472 *******
2026-03-11 00:56:16.442833 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.442840 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.442847 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.442854 | orchestrator |
2026-03-11 00:56:16.442861 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2026-03-11 00:56:16.442868 | orchestrator | Wednesday 11 March 2026 00:55:02 +0000 (0:00:01.458) 0:09:23.931 *******
2026-03-11 00:56:16.442876 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:16.442882 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:16.442890 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:16.442897 | orchestrator |
2026-03-11 00:56:16.442903 | orchestrator | RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
2026-03-11 00:56:16.442909 | orchestrator | Wednesday 11 March 2026 00:55:02 +0000 (0:00:00.687) 0:09:24.618 *******
2026-03-11 00:56:16.442915 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:16.442922 | orchestrator |
2026-03-11 00:56:16.442929 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
2026-03-11 00:56:16.442935 | orchestrator | Wednesday 11 March 2026 00:55:04 +0000 (0:00:01.601) 0:09:26.220 *******
2026-03-11 00:56:16.442941 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.442948 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.442955 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.442961 | orchestrator |
2026-03-11 00:56:16.442967 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
2026-03-11 00:56:16.442974 | orchestrator | Wednesday 11 March 2026 00:55:04 +0000 (0:00:00.359) 0:09:26.579 *******
2026-03-11 00:56:16.442980 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:16.442986 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:16.442992 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:16.442998 | orchestrator |
2026-03-11 00:56:16.443003 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
2026-03-11 00:56:16.443009 | orchestrator | Wednesday 11 March 2026 00:55:05 +0000 (0:00:01.185) 0:09:27.765 *******
2026-03-11 00:56:16.443016 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:56:16.443022 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-11 00:56:16.443028 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-11 00:56:16.443035 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.443040 | orchestrator |
2026-03-11 00:56:16.443046 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
2026-03-11 00:56:16.443052 | orchestrator | Wednesday 11 March 2026 00:55:06 +0000 (0:00:00.870) 0:09:28.636 *******
2026-03-11 00:56:16.443058 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.443064 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.443070 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.443077 | orchestrator |
2026-03-11 00:56:16.443083 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-11 00:56:16.443089 | orchestrator |
2026-03-11 00:56:16.443096 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2026-03-11 00:56:16.443108 | orchestrator | Wednesday 11 March 2026 00:55:07 +0000 (0:00:00.826) 0:09:29.463 *******
2026-03-11 00:56:16.443120 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:16.443128 | orchestrator |
2026-03-11 00:56:16.443134 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2026-03-11 00:56:16.443139 | orchestrator | Wednesday 11 March 2026 00:55:08 +0000 (0:00:00.501) 0:09:29.964 *******
2026-03-11 00:56:16.443146 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:16.443152 | orchestrator |
2026-03-11 00:56:16.443159 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2026-03-11 00:56:16.443165 | orchestrator | Wednesday 11 March 2026 00:55:08 +0000 (0:00:00.723) 0:09:30.688 *******
2026-03-11 00:56:16.443171 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.443178 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.443184 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.443190 | orchestrator |
2026-03-11 00:56:16.443197 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2026-03-11 00:56:16.443202 | orchestrator | Wednesday 11 March 2026 00:55:09 +0000 (0:00:00.316) 0:09:31.004 *******
2026-03-11 00:56:16.443208 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.443214 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.443221 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.443227 | orchestrator |
2026-03-11 00:56:16.443234 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2026-03-11 00:56:16.443240 | orchestrator | Wednesday 11 March 2026 00:55:09 +0000 (0:00:00.635) 0:09:31.640 *******
2026-03-11 00:56:16.443246 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.443252 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.443259 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.443265 | orchestrator |
2026-03-11 00:56:16.443271 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2026-03-11 00:56:16.443277 | orchestrator | Wednesday 11 March 2026 00:55:10 +0000 (0:00:01.094) 0:09:32.734 *******
2026-03-11 00:56:16.443283 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.443290 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.443295 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.443301 | orchestrator |
2026-03-11 00:56:16.443307 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2026-03-11 00:56:16.443313 | orchestrator | Wednesday 11 March 2026 00:55:11 +0000 (0:00:00.808) 0:09:33.542 *******
2026-03-11 00:56:16.443320 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.443325 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.443331 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.443338 | orchestrator |
2026-03-11 00:56:16.443349 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2026-03-11 00:56:16.443356 | orchestrator | Wednesday 11 March 2026 00:55:12 +0000 (0:00:00.307) 0:09:33.849 *******
2026-03-11 00:56:16.443362 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.443369 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.443375 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.443381 | orchestrator |
2026-03-11 00:56:16.443387 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2026-03-11 00:56:16.443393 | orchestrator | Wednesday 11 March 2026 00:55:12 +0000 (0:00:00.306) 0:09:34.156 *******
2026-03-11 00:56:16.443399 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.443405 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.443411 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.443416 | orchestrator |
2026-03-11 00:56:16.443422 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2026-03-11 00:56:16.443428 | orchestrator | Wednesday 11 March 2026 00:55:12 +0000 (0:00:00.297) 0:09:34.453 *******
2026-03-11 00:56:16.443441 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.443447 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.443452 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.443458 | orchestrator |
2026-03-11 00:56:16.443463 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2026-03-11 00:56:16.443469 | orchestrator | Wednesday 11 March 2026 00:55:13 +0000 (0:00:01.097) 0:09:35.551 *******
2026-03-11 00:56:16.443475 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.443480 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.443486 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.443491 | orchestrator |
2026-03-11 00:56:16.443497 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2026-03-11 00:56:16.443504 | orchestrator | Wednesday 11 March 2026 00:55:14 +0000 (0:00:00.792) 0:09:36.343 *******
2026-03-11 00:56:16.443510 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.443518 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.443525 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.443531 | orchestrator |
2026-03-11 00:56:16.443537 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2026-03-11 00:56:16.443542 | orchestrator | Wednesday 11 March 2026 00:55:14 +0000 (0:00:00.308) 0:09:36.652 *******
2026-03-11 00:56:16.443548 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.443554 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.443561 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.443567 | orchestrator |
2026-03-11 00:56:16.443572 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2026-03-11 00:56:16.443578 | orchestrator | Wednesday 11 March 2026 00:55:15 +0000 (0:00:00.300) 0:09:36.953 *******
2026-03-11 00:56:16.443584 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.443590 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.443597 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.443603 | orchestrator |
2026-03-11 00:56:16.443608 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2026-03-11 00:56:16.443614 | orchestrator | Wednesday 11 March 2026 00:55:15 +0000 (0:00:00.687) 0:09:37.640 *******
2026-03-11 00:56:16.443619 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.443625 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.443631 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.443637 | orchestrator |
2026-03-11 00:56:16.443642 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2026-03-11 00:56:16.443648 | orchestrator | Wednesday 11 March 2026 00:55:16 +0000 (0:00:00.348) 0:09:37.989 *******
2026-03-11 00:56:16.443654 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.443659 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.443665 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.443670 | orchestrator |
2026-03-11 00:56:16.443677 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2026-03-11 00:56:16.443684 | orchestrator | Wednesday 11 March 2026 00:55:16 +0000 (0:00:00.371) 0:09:38.361 *******
2026-03-11 00:56:16.443690 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.443697 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.443703 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.443760 | orchestrator |
2026-03-11 00:56:16.443769 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2026-03-11 00:56:16.443775 | orchestrator | Wednesday 11 March 2026 00:55:16 +0000 (0:00:00.284) 0:09:38.645 *******
2026-03-11 00:56:16.443782 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.443828 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.443836 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.443842 | orchestrator |
2026-03-11 00:56:16.443848 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2026-03-11 00:56:16.443854 | orchestrator | Wednesday 11 March 2026 00:55:17 +0000 (0:00:00.607) 0:09:39.253 *******
2026-03-11 00:56:16.443860 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.443867 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.443881 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.443887 | orchestrator |
2026-03-11 00:56:16.443892 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2026-03-11 00:56:16.443898 | orchestrator | Wednesday 11 March 2026 00:55:17 +0000 (0:00:00.316) 0:09:39.570 *******
2026-03-11 00:56:16.443904 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.443910 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.443916 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.443921 | orchestrator |
2026-03-11 00:56:16.443927 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2026-03-11 00:56:16.443933 | orchestrator | Wednesday 11 March 2026 00:55:18 +0000 (0:00:00.352) 0:09:39.923 *******
2026-03-11 00:56:16.443938 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:56:16.443944 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:56:16.443949 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:56:16.443955 | orchestrator |
2026-03-11 00:56:16.443960 | orchestrator | TASK [ceph-rgw : Include common.yml] *******************************************
2026-03-11 00:56:16.443967 | orchestrator | Wednesday 11 March 2026 00:55:19 +0000 (0:00:00.923) 0:09:40.846 *******
2026-03-11 00:56:16.443972 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:16.443979 | orchestrator |
2026-03-11 00:56:16.443986 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-11 00:56:16.444001 | orchestrator | Wednesday 11 March 2026 00:55:19 +0000 (0:00:00.593) 0:09:41.439 *******
2026-03-11 00:56:16.444007 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:56:16.444014 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-11 00:56:16.444020 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-11 00:56:16.444025 | orchestrator |
2026-03-11 00:56:16.444031 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-11 00:56:16.444037 | orchestrator | Wednesday 11 March 2026 00:55:21 +0000 (0:00:02.258) 0:09:43.698 *******
2026-03-11 00:56:16.444042 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-11 00:56:16.444049 | orchestrator | skipping: [testbed-node-3] => (item=None)
2026-03-11 00:56:16.444055 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:16.444061 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-11 00:56:16.444067 | orchestrator | skipping: [testbed-node-4] => (item=None)
2026-03-11 00:56:16.444073 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:16.444079 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-11 00:56:16.444085 | orchestrator | skipping: [testbed-node-5] => (item=None)
2026-03-11 00:56:16.444090 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:16.444096 | orchestrator |
2026-03-11 00:56:16.444102 | orchestrator | TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
2026-03-11 00:56:16.444108 | orchestrator | Wednesday 11 March 2026 00:55:23 +0000 (0:00:01.599) 0:09:45.298 *******
2026-03-11 00:56:16.444114 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.444121 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:56:16.444127 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:56:16.444132 | orchestrator |
2026-03-11 00:56:16.444138 | orchestrator | TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
2026-03-11 00:56:16.444144 | orchestrator | Wednesday 11 March 2026 00:55:23 +0000 (0:00:00.351) 0:09:45.650 *******
2026-03-11 00:56:16.444150 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:56:16.444156 | orchestrator |
2026-03-11 00:56:16.444162 | orchestrator | TASK [ceph-rgw : Create rados gateway directories] *****************************
2026-03-11 00:56:16.444168 | orchestrator | Wednesday 11 March 2026 00:55:24 +0000 (0:00:00.588) 0:09:46.238 *******
2026-03-11 00:56:16.444175 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2026-03-11 00:56:16.444189 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2026-03-11 00:56:16.444196 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2026-03-11 00:56:16.444202 | orchestrator |
2026-03-11 00:56:16.444208 | orchestrator | TASK [ceph-rgw : Create rgw keyrings] ******************************************
2026-03-11 00:56:16.444214 | orchestrator | Wednesday 11 March 2026 00:55:25 +0000 (0:00:01.352) 0:09:47.590 *******
2026-03-11 00:56:16.444220 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:56:16.444231 | orchestrator | changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-11 00:56:16.444238 | orchestrator | changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:56:16.444245 | orchestrator | changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-11 00:56:16.444252 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:56:16.444258 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
2026-03-11 00:56:16.444264 | orchestrator |
2026-03-11 00:56:16.444270 | orchestrator | TASK [ceph-rgw : Get keys from monitors] ***************************************
2026-03-11 00:56:16.444276 | orchestrator | Wednesday 11 March 2026 00:55:30 +0000 (0:00:04.559) 0:09:52.149 *******
2026-03-11 00:56:16.444283 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:56:16.444287 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-11 00:56:16.444291 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:56:16.444295 | orchestrator | ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-11 00:56:16.444298 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:56:16.444302 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-11 00:56:16.444306 | orchestrator |
2026-03-11 00:56:16.444309 | orchestrator | TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
2026-03-11 00:56:16.444313 | orchestrator | Wednesday 11 March 2026 00:55:33 +0000 (0:00:03.098) 0:09:55.248 *******
2026-03-11 00:56:16.444317 | orchestrator | changed: [testbed-node-3] => (item=None)
2026-03-11 00:56:16.444320 | orchestrator | changed: [testbed-node-3]
2026-03-11 00:56:16.444324 | orchestrator | changed: [testbed-node-4] => (item=None)
2026-03-11 00:56:16.444328 | orchestrator | changed: [testbed-node-4]
2026-03-11 00:56:16.444331 | orchestrator | changed: [testbed-node-5] => (item=None)
2026-03-11 00:56:16.444335 | orchestrator | changed: [testbed-node-5]
2026-03-11 00:56:16.444339 | orchestrator |
2026-03-11 00:56:16.444342 | orchestrator | TASK [ceph-rgw : Rgw pool creation tasks] **************************************
2026-03-11 00:56:16.444353 | orchestrator | Wednesday 11 March 2026 00:55:34 +0000 (0:00:01.107) 0:09:56.356 *******
2026-03-11 00:56:16.444357 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2026-03-11 00:56:16.444361 | orchestrator |
2026-03-11 00:56:16.444365 | orchestrator | TASK [ceph-rgw : Create ec profile] ********************************************
2026-03-11 00:56:16.444369 | orchestrator | Wednesday 11 March 2026 00:55:34 +0000 (0:00:00.187) 0:09:56.543 *******
2026-03-11 00:56:16.444373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-11 00:56:16.444378 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-11 00:56:16.444381 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-11 00:56:16.444389 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-11 00:56:16.444393 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-11 00:56:16.444397 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:56:16.444401 | orchestrator |
2026-03-11 00:56:16.444405 | orchestrator | TASK [ceph-rgw : Set crush rule] ***********************************************
2026-03-11 00:56:16.444409 | orchestrator | Wednesday 11 March 2026 00:55:35 +0000 (0:00:00.888) 0:09:57.431 *******
2026-03-11 00:56:16.444412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-11 00:56:16.444416 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-11 00:56:16.444420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-11 00:56:16.444424 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-11 00:56:16.444430 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2026-03-11 00:56:16.444435 | orchestrator | skipping: [testbed-node-3]
2026-03-11
00:56:16.444441 | orchestrator | 2026-03-11 00:56:16.444447 | orchestrator | TASK [ceph-rgw : Create rgw pools] ********************************************* 2026-03-11 00:56:16.444452 | orchestrator | Wednesday 11 March 2026 00:55:36 +0000 (0:00:00.527) 0:09:57.959 ******* 2026-03-11 00:56:16.444458 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-11 00:56:16.444467 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-11 00:56:16.444473 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-11 00:56:16.444479 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-11 00:56:16.444485 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}}) 2026-03-11 00:56:16.444490 | orchestrator | 2026-03-11 00:56:16.444495 | orchestrator | TASK [ceph-rgw : Include_tasks openstack-keystone.yml] ************************* 2026-03-11 00:56:16.444501 | orchestrator | Wednesday 11 March 2026 00:56:03 +0000 (0:00:27.739) 0:10:25.698 ******* 2026-03-11 00:56:16.444507 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.444513 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.444519 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.444525 | orchestrator | 2026-03-11 00:56:16.444531 | orchestrator | TASK [ceph-rgw : Include_tasks start_radosgw.yml] ****************************** 2026-03-11 00:56:16.444537 | orchestrator | 
Wednesday 11 March 2026 00:56:04 +0000 (0:00:00.314) 0:10:26.013 ******* 2026-03-11 00:56:16.444542 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.444548 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.444554 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.444560 | orchestrator | 2026-03-11 00:56:16.444566 | orchestrator | TASK [ceph-rgw : Include start_docker_rgw.yml] ********************************* 2026-03-11 00:56:16.444573 | orchestrator | Wednesday 11 March 2026 00:56:04 +0000 (0:00:00.326) 0:10:26.339 ******* 2026-03-11 00:56:16.444579 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:56:16.444589 | orchestrator | 2026-03-11 00:56:16.444593 | orchestrator | TASK [ceph-rgw : Include_task systemd.yml] ************************************* 2026-03-11 00:56:16.444597 | orchestrator | Wednesday 11 March 2026 00:56:05 +0000 (0:00:00.746) 0:10:27.086 ******* 2026-03-11 00:56:16.444601 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:56:16.444604 | orchestrator | 2026-03-11 00:56:16.444612 | orchestrator | TASK [ceph-rgw : Generate systemd unit file] *********************************** 2026-03-11 00:56:16.444616 | orchestrator | Wednesday 11 March 2026 00:56:05 +0000 (0:00:00.518) 0:10:27.604 ******* 2026-03-11 00:56:16.444620 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:16.444624 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:16.444627 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:16.444631 | orchestrator | 2026-03-11 00:56:16.444635 | orchestrator | TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ******************** 2026-03-11 00:56:16.444639 | orchestrator | Wednesday 11 March 2026 00:56:07 +0000 (0:00:01.295) 0:10:28.900 ******* 2026-03-11 00:56:16.444642 | orchestrator | changed: 
[testbed-node-3] 2026-03-11 00:56:16.444646 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:16.444650 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:16.444653 | orchestrator | 2026-03-11 00:56:16.444657 | orchestrator | TASK [ceph-rgw : Enable ceph-radosgw.target] *********************************** 2026-03-11 00:56:16.444661 | orchestrator | Wednesday 11 March 2026 00:56:08 +0000 (0:00:01.507) 0:10:30.407 ******* 2026-03-11 00:56:16.444665 | orchestrator | changed: [testbed-node-4] 2026-03-11 00:56:16.444668 | orchestrator | changed: [testbed-node-3] 2026-03-11 00:56:16.444672 | orchestrator | changed: [testbed-node-5] 2026-03-11 00:56:16.444676 | orchestrator | 2026-03-11 00:56:16.444680 | orchestrator | TASK [ceph-rgw : Systemd start rgw container] ********************************** 2026-03-11 00:56:16.444686 | orchestrator | Wednesday 11 March 2026 00:56:10 +0000 (0:00:01.763) 0:10:32.171 ******* 2026-03-11 00:56:16.444691 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2026-03-11 00:56:16.444697 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2026-03-11 00:56:16.444702 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2026-03-11 00:56:16.444725 | orchestrator | 2026-03-11 00:56:16.444732 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2026-03-11 00:56:16.444739 | orchestrator | Wednesday 11 March 2026 00:56:13 +0000 (0:00:02.655) 0:10:34.826 ******* 2026-03-11 00:56:16.444746 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.444752 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.444758 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.444763 | orchestrator 
| 2026-03-11 00:56:16.444769 | orchestrator | RUNNING HANDLER [ceph-handler : Rgws handler] ********************************** 2026-03-11 00:56:16.444775 | orchestrator | Wednesday 11 March 2026 00:56:13 +0000 (0:00:00.305) 0:10:35.132 ******* 2026-03-11 00:56:16.444781 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:56:16.444787 | orchestrator | 2026-03-11 00:56:16.444794 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ******** 2026-03-11 00:56:16.444800 | orchestrator | Wednesday 11 March 2026 00:56:13 +0000 (0:00:00.467) 0:10:35.600 ******* 2026-03-11 00:56:16.444806 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.444813 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.444817 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.444821 | orchestrator | 2026-03-11 00:56:16.444824 | orchestrator | RUNNING HANDLER [ceph-handler : Copy rgw restart script] *********************** 2026-03-11 00:56:16.444828 | orchestrator | Wednesday 11 March 2026 00:56:14 +0000 (0:00:00.506) 0:10:36.106 ******* 2026-03-11 00:56:16.444836 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:56:16.444850 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:56:16.444856 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:56:16.444863 | orchestrator | 2026-03-11 00:56:16.444869 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ******************** 2026-03-11 00:56:16.444875 | orchestrator | Wednesday 11 March 2026 00:56:14 +0000 (0:00:00.317) 0:10:36.424 ******* 2026-03-11 00:56:16.444881 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2026-03-11 00:56:16.444886 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2026-03-11 00:56:16.444894 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2026-03-11 00:56:16.444897 | orchestrator 
| skipping: [testbed-node-3] 2026-03-11 00:56:16.444901 | orchestrator | 2026-03-11 00:56:16.444905 | orchestrator | RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] ********* 2026-03-11 00:56:16.444909 | orchestrator | Wednesday 11 March 2026 00:56:15 +0000 (0:00:00.569) 0:10:36.993 ******* 2026-03-11 00:56:16.444912 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:56:16.444916 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:56:16.444920 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:56:16.444924 | orchestrator | 2026-03-11 00:56:16.444927 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:56:16.444931 | orchestrator | testbed-node-0 : ok=134  changed=35  unreachable=0 failed=0 skipped=125  rescued=0 ignored=0 2026-03-11 00:56:16.444936 | orchestrator | testbed-node-1 : ok=127  changed=31  unreachable=0 failed=0 skipped=120  rescued=0 ignored=0 2026-03-11 00:56:16.444940 | orchestrator | testbed-node-2 : ok=134  changed=33  unreachable=0 failed=0 skipped=119  rescued=0 ignored=0 2026-03-11 00:56:16.444943 | orchestrator | testbed-node-3 : ok=193  changed=45  unreachable=0 failed=0 skipped=162  rescued=0 ignored=0 2026-03-11 00:56:16.444947 | orchestrator | testbed-node-4 : ok=175  changed=40  unreachable=0 failed=0 skipped=123  rescued=0 ignored=0 2026-03-11 00:56:16.444954 | orchestrator | testbed-node-5 : ok=177  changed=41  unreachable=0 failed=0 skipped=121  rescued=0 ignored=0 2026-03-11 00:56:16.444958 | orchestrator | 2026-03-11 00:56:16.444962 | orchestrator | 2026-03-11 00:56:16.444965 | orchestrator | 2026-03-11 00:56:16.444969 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:56:16.444973 | orchestrator | Wednesday 11 March 2026 00:56:15 +0000 (0:00:00.249) 0:10:37.243 ******* 2026-03-11 00:56:16.444977 | orchestrator | =============================================================================== 
2026-03-11 00:56:16.444980 | orchestrator | ceph-container-common : Pulling Ceph container image ------------------- 60.42s 2026-03-11 00:56:16.444984 | orchestrator | ceph-osd : Use ceph-volume to create osds ------------------------------ 39.92s 2026-03-11 00:56:16.444988 | orchestrator | ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.19s 2026-03-11 00:56:16.444992 | orchestrator | ceph-rgw : Create rgw pools -------------------------------------------- 27.74s 2026-03-11 00:56:16.444997 | orchestrator | ceph-mon : Set cluster configs ----------------------------------------- 14.06s 2026-03-11 00:56:16.445002 | orchestrator | ceph-osd : Wait for all osd to be up ----------------------------------- 12.48s 2026-03-11 00:56:16.445008 | orchestrator | ceph-mgr : Create ceph mgr keyring(s) on a mon node --------------------- 9.99s 2026-03-11 00:56:16.445014 | orchestrator | ceph-mon : Fetch ceph initial keys -------------------------------------- 9.58s 2026-03-11 00:56:16.445020 | orchestrator | ceph-mds : Create filesystem pools -------------------------------------- 8.73s 2026-03-11 00:56:16.445026 | orchestrator | ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.74s 2026-03-11 00:56:16.445032 | orchestrator | ceph-config : Create ceph initial directories --------------------------- 6.47s 2026-03-11 00:56:16.445043 | orchestrator | ceph-mgr : Add modules to ceph-mgr -------------------------------------- 5.09s 2026-03-11 00:56:16.445050 | orchestrator | ceph-rgw : Create rgw keyrings ------------------------------------------ 4.56s 2026-03-11 00:56:16.445054 | orchestrator | ceph-osd : Apply operating system tuning -------------------------------- 4.51s 2026-03-11 00:56:16.445058 | orchestrator | ceph-osd : Systemd start osd -------------------------------------------- 3.78s 2026-03-11 00:56:16.445061 | orchestrator | ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created --- 
3.62s 2026-03-11 00:56:16.445065 | orchestrator | ceph-mds : Create ceph filesystem --------------------------------------- 3.59s 2026-03-11 00:56:16.445069 | orchestrator | ceph-crash : Create client.crash keyring -------------------------------- 3.51s 2026-03-11 00:56:16.445072 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 3.28s 2026-03-11 00:56:16.445076 | orchestrator | ceph-crash : Start the ceph-crash service ------------------------------- 3.22s 2026-03-11 00:56:16.445080 | orchestrator | 2026-03-11 00:56:16 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:56:16.445084 | orchestrator | 2026-03-11 00:56:16 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:19.475368 | orchestrator | 2026-03-11 00:56:19 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:56:19.477114 | orchestrator | 2026-03-11 00:56:19 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:56:19.478957 | orchestrator | 2026-03-11 00:56:19 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:56:19.479921 | orchestrator | 2026-03-11 00:56:19 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:22.535597 | orchestrator | 2026-03-11 00:56:22 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:56:22.536144 | orchestrator | 2026-03-11 00:56:22 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:56:22.538242 | orchestrator | 2026-03-11 00:56:22 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:56:22.538284 | orchestrator | 2026-03-11 00:56:22 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:56:25.589080 | orchestrator | 2026-03-11 00:56:25 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:56:25.590321 | orchestrator | 2026-03-11 00:56:25 | INFO  | 
Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:56:25.592014 | orchestrator | 2026-03-11 00:56:25 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state STARTED 2026-03-11 00:56:25.592133 | orchestrator | 2026-03-11 00:56:25 | INFO  |
Wait 1 second(s) until the next check 2026-03-11 00:57:02.154262 | orchestrator | 2026-03-11 00:57:02 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:57:02.156268 | orchestrator | 2026-03-11 00:57:02 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:57:02.158638 | orchestrator | 2026-03-11 00:57:02 | INFO  | Task 3d0f595b-e22a-4b1d-a54f-dc23ee65bbad is in state SUCCESS 2026-03-11 00:57:02.158914 | orchestrator | 2026-03-11 00:57:02 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:02.160091 | orchestrator | 2026-03-11 00:57:02.160119 | orchestrator | 2026-03-11 00:57:02.160125 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 00:57:02.160129 | orchestrator | 2026-03-11 00:57:02.160133 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 00:57:02.160137 | orchestrator | Wednesday 11 March 2026 00:54:29 +0000 (0:00:00.284) 0:00:00.284 ******* 2026-03-11 00:57:02.160141 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:02.160146 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:57:02.160150 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:57:02.160153 | orchestrator | 2026-03-11 00:57:02.160157 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 00:57:02.160161 | orchestrator | Wednesday 11 March 2026 00:54:30 +0000 (0:00:00.386) 0:00:00.670 ******* 2026-03-11 00:57:02.160165 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True) 2026-03-11 00:57:02.160169 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True) 2026-03-11 00:57:02.160173 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True) 2026-03-11 00:57:02.160179 | orchestrator | 2026-03-11 00:57:02.160183 | orchestrator | PLAY [Apply role opensearch] 
*************************************************** 2026-03-11 00:57:02.160187 | orchestrator | 2026-03-11 00:57:02.160190 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-11 00:57:02.160194 | orchestrator | Wednesday 11 March 2026 00:54:30 +0000 (0:00:00.505) 0:00:01.176 ******* 2026-03-11 00:57:02.160198 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:57:02.160202 | orchestrator | 2026-03-11 00:57:02.160206 | orchestrator | TASK [opensearch : Setting sysctl values] ************************************** 2026-03-11 00:57:02.160209 | orchestrator | Wednesday 11 March 2026 00:54:31 +0000 (0:00:00.504) 0:00:01.680 ******* 2026-03-11 00:57:02.160213 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-11 00:57:02.160217 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-11 00:57:02.160220 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144}) 2026-03-11 00:57:02.160224 | orchestrator | 2026-03-11 00:57:02.160228 | orchestrator | TASK [opensearch : Ensuring config directories exist] ************************** 2026-03-11 00:57:02.160252 | orchestrator | Wednesday 11 March 2026 00:54:31 +0000 (0:00:00.669) 0:00:02.349 ******* 2026-03-11 00:57:02.160259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:02.160264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:02.160274 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': 
['option dontlog-normal']}}}}) 2026-03-11 00:57:02.160280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:02.160286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:02.160294 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:02.160298 | orchestrator | 2026-03-11 00:57:02.160302 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-11 00:57:02.160306 | orchestrator | Wednesday 11 March 2026 00:54:33 +0000 (0:00:01.495) 0:00:03.844 ******* 2026-03-11 00:57:02.160310 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:57:02.160314 | orchestrator | 2026-03-11 00:57:02.160317 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2026-03-11 00:57:02.160321 | orchestrator | Wednesday 11 March 2026 00:54:33 
+0000 (0:00:00.471) 0:00:04.316 ******* 2026-03-11 00:57:02.160328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:02.160332 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:02.160341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:02.160345 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:02.160352 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:02.160356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:02.160363 | orchestrator | 
2026-03-11 00:57:02.160367 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2026-03-11 00:57:02.160371 | orchestrator | Wednesday 11 March 2026 00:54:36 +0000 (0:00:02.522) 0:00:06.839 ******* 2026-03-11 00:57:02.160377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-11 00:57:02.160381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 
'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-11 00:57:02.160385 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:02.160389 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-11 00:57:02.160396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 
'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-11 00:57:02.160403 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:02.160409 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-11 00:57:02.160414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-11 00:57:02.160418 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:02.160421 | orchestrator | 2026-03-11 00:57:02.160425 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2026-03-11 00:57:02.160429 | orchestrator | Wednesday 11 March 2026 00:54:37 +0000 (0:00:01.105) 0:00:07.944 ******* 2026-03-11 00:57:02.160433 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-11 00:57:02.160440 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-11 00:57:02.160447 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:02.160453 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-11 00:57:02.160457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-11 00:57:02.160461 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:02.160465 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2026-03-11 00:57:02.160472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2026-03-11 00:57:02.160479 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:02.160483 | orchestrator | 2026-03-11 00:57:02.160486 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2026-03-11 00:57:02.160490 | orchestrator | Wednesday 11 March 2026 00:54:38 +0000 (0:00:00.892) 0:00:08.837 ******* 2026-03-11 00:57:02.160496 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:02.160500 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:02.160504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:02.160511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:02.160521 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:02.160527 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 
'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:02.160531 | orchestrator | 2026-03-11 00:57:02.160535 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2026-03-11 00:57:02.160539 | orchestrator | Wednesday 11 March 2026 00:54:40 +0000 (0:00:02.452) 0:00:11.290 ******* 2026-03-11 00:57:02.160542 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:57:02.160546 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:02.160550 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:57:02.160554 | orchestrator | 2026-03-11 00:57:02.160557 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2026-03-11 00:57:02.160561 | orchestrator | Wednesday 11 March 2026 00:54:43 +0000 (0:00:02.283) 0:00:13.573 ******* 2026-03-11 00:57:02.160565 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:02.160569 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:57:02.160573 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:57:02.160576 | 
orchestrator | 2026-03-11 00:57:02.160580 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2026-03-11 00:57:02.160584 | orchestrator | Wednesday 11 March 2026 00:54:44 +0000 (0:00:01.729) 0:00:15.303 ******* 2026-03-11 00:57:02.160588 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:02.160597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 
00:57:02.160601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2026-03-11 00:57:02.160607 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:02.160612 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:02.160621 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2026-03-11 00:57:02.160625 | orchestrator | 2026-03-11 00:57:02.160629 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-11 00:57:02.160633 | orchestrator | Wednesday 11 March 2026 00:54:46 +0000 (0:00:01.774) 0:00:17.077 ******* 2026-03-11 00:57:02.160637 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:02.160641 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:02.160644 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:02.160648 | orchestrator | 2026-03-11 00:57:02.160652 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-11 00:57:02.160656 | orchestrator | Wednesday 11 March 2026 00:54:46 +0000 (0:00:00.280) 0:00:17.358 ******* 2026-03-11 00:57:02.160659 | orchestrator | 2026-03-11 00:57:02.160663 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-11 00:57:02.160667 | orchestrator | Wednesday 11 March 2026 00:54:46 +0000 (0:00:00.056) 0:00:17.415 ******* 2026-03-11 00:57:02.160670 | orchestrator | 2026-03-11 00:57:02.160696 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2026-03-11 00:57:02.160703 | orchestrator | Wednesday 11 March 2026 00:54:46 +0000 (0:00:00.063) 0:00:17.478 ******* 2026-03-11 00:57:02.160710 | orchestrator | 2026-03-11 00:57:02.160716 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2026-03-11 00:57:02.160722 | orchestrator | Wednesday 11 March 2026 00:54:46 +0000 (0:00:00.059) 0:00:17.537 ******* 2026-03-11 00:57:02.160731 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:02.160737 | orchestrator | 2026-03-11 00:57:02.160743 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] 
********************************* 2026-03-11 00:57:02.160749 | orchestrator | Wednesday 11 March 2026 00:54:47 +0000 (0:00:00.474) 0:00:18.012 ******* 2026-03-11 00:57:02.160756 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:02.160763 | orchestrator | 2026-03-11 00:57:02.160769 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2026-03-11 00:57:02.160775 | orchestrator | Wednesday 11 March 2026 00:54:47 +0000 (0:00:00.176) 0:00:18.188 ******* 2026-03-11 00:57:02.160779 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:02.160784 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:57:02.160788 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:57:02.160793 | orchestrator | 2026-03-11 00:57:02.160797 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2026-03-11 00:57:02.160801 | orchestrator | Wednesday 11 March 2026 00:55:34 +0000 (0:00:46.637) 0:01:04.826 ******* 2026-03-11 00:57:02.160806 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:02.160810 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:57:02.160814 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:57:02.160819 | orchestrator | 2026-03-11 00:57:02.160823 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2026-03-11 00:57:02.160831 | orchestrator | Wednesday 11 March 2026 00:56:45 +0000 (0:01:11.357) 0:02:16.184 ******* 2026-03-11 00:57:02.160836 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:57:02.160841 | orchestrator | 2026-03-11 00:57:02.160845 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2026-03-11 00:57:02.160850 | orchestrator | Wednesday 11 March 2026 00:56:46 +0000 (0:00:00.704) 0:02:16.889 ******* 2026-03-11 00:57:02.160854 | orchestrator | ok: 
[testbed-node-0] 2026-03-11 00:57:02.160858 | orchestrator | 2026-03-11 00:57:02.160863 | orchestrator | TASK [opensearch : Wait for OpenSearch cluster to become healthy] ************** 2026-03-11 00:57:02.160867 | orchestrator | Wednesday 11 March 2026 00:56:49 +0000 (0:00:03.219) 0:02:20.108 ******* 2026-03-11 00:57:02.160872 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:02.160876 | orchestrator | 2026-03-11 00:57:02.160881 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2026-03-11 00:57:02.160885 | orchestrator | Wednesday 11 March 2026 00:56:51 +0000 (0:00:02.087) 0:02:22.196 ******* 2026-03-11 00:57:02.160890 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:02.160894 | orchestrator | 2026-03-11 00:57:02.160899 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2026-03-11 00:57:02.160902 | orchestrator | Wednesday 11 March 2026 00:56:54 +0000 (0:00:02.404) 0:02:24.600 ******* 2026-03-11 00:57:02.160906 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:02.160910 | orchestrator | 2026-03-11 00:57:02.160914 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2026-03-11 00:57:02.160917 | orchestrator | Wednesday 11 March 2026 00:56:56 +0000 (0:00:02.535) 0:02:27.136 ******* 2026-03-11 00:57:02.160921 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:02.160925 | orchestrator | 2026-03-11 00:57:02.160928 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:57:02.160933 | orchestrator | testbed-node-0 : ok=19  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-11 00:57:02.160938 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2026-03-11 00:57:02.160945 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 
2026-03-11 00:57:02.160949 | orchestrator | 2026-03-11 00:57:02.160953 | orchestrator | 2026-03-11 00:57:02.160956 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 00:57:02.160960 | orchestrator | Wednesday 11 March 2026 00:56:59 +0000 (0:00:03.119) 0:02:30.256 ******* 2026-03-11 00:57:02.160964 | orchestrator | =============================================================================== 2026-03-11 00:57:02.160967 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 71.36s 2026-03-11 00:57:02.160971 | orchestrator | opensearch : Restart opensearch container ------------------------------ 46.64s 2026-03-11 00:57:02.160975 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 3.22s 2026-03-11 00:57:02.160981 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 3.12s 2026-03-11 00:57:02.160987 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.54s 2026-03-11 00:57:02.160996 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 2.52s 2026-03-11 00:57:02.161003 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.45s 2026-03-11 00:57:02.161009 | orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.40s 2026-03-11 00:57:02.161014 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.28s 2026-03-11 00:57:02.161021 | orchestrator | opensearch : Wait for OpenSearch cluster to become healthy -------------- 2.09s 2026-03-11 00:57:02.161032 | orchestrator | opensearch : Check opensearch containers -------------------------------- 1.77s 2026-03-11 00:57:02.161039 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.73s 2026-03-11 00:57:02.161045 | orchestrator | opensearch : 
Ensuring config directories exist -------------------------- 1.50s 2026-03-11 00:57:02.161051 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.11s 2026-03-11 00:57:02.161058 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 0.89s 2026-03-11 00:57:02.161065 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.70s 2026-03-11 00:57:02.161069 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.67s 2026-03-11 00:57:02.161072 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.51s 2026-03-11 00:57:02.161076 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.50s 2026-03-11 00:57:02.161080 | orchestrator | opensearch : Disable shard allocation ----------------------------------- 0.48s 2026-03-11 00:57:05.208939 | orchestrator | 2026-03-11 00:57:05 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:57:05.210489 | orchestrator | 2026-03-11 00:57:05 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:57:05.210519 | orchestrator | 2026-03-11 00:57:05 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:08.249254 | orchestrator | 2026-03-11 00:57:08 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:57:08.252396 | orchestrator | 2026-03-11 00:57:08 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:57:08.252450 | orchestrator | 2026-03-11 00:57:08 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:11.294828 | orchestrator | 2026-03-11 00:57:11 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:57:11.296622 | orchestrator | 2026-03-11 00:57:11 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 
00:57:11.296663 | orchestrator | 2026-03-11 00:57:11 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:14.342747 | orchestrator | 2026-03-11 00:57:14 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:57:14.343980 | orchestrator | 2026-03-11 00:57:14 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:57:14.344019 | orchestrator | 2026-03-11 00:57:14 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:17.389278 | orchestrator | 2026-03-11 00:57:17 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state STARTED 2026-03-11 00:57:17.389480 | orchestrator | 2026-03-11 00:57:17 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:57:17.389549 | orchestrator | 2026-03-11 00:57:17 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:20.439527 | orchestrator | 2026-03-11 00:57:20 | INFO  | Task ff6029ee-773e-46b9-a2a8-f1dcf071216d is in state SUCCESS 2026-03-11 00:57:20.441242 | orchestrator | 2026-03-11 00:57:20.441299 | orchestrator | 2026-03-11 00:57:20.441310 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2026-03-11 00:57:20.441318 | orchestrator | 2026-03-11 00:57:20.441325 | orchestrator | TASK [Inform the user about the following task] ******************************** 2026-03-11 00:57:20.441333 | orchestrator | Wednesday 11 March 2026 00:54:29 +0000 (0:00:00.093) 0:00:00.093 ******* 2026-03-11 00:57:20.441340 | orchestrator | ok: [localhost] => { 2026-03-11 00:57:20.441348 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2026-03-11 00:57:20.441353 | orchestrator | } 2026-03-11 00:57:20.441357 | orchestrator | 2026-03-11 00:57:20.441361 | orchestrator | TASK [Check MariaDB service] *************************************************** 2026-03-11 00:57:20.441379 | orchestrator | Wednesday 11 March 2026 00:54:29 +0000 (0:00:00.048) 0:00:00.141 ******* 2026-03-11 00:57:20.441384 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2026-03-11 00:57:20.441389 | orchestrator | ...ignoring 2026-03-11 00:57:20.441393 | orchestrator | 2026-03-11 00:57:20.441397 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2026-03-11 00:57:20.441402 | orchestrator | Wednesday 11 March 2026 00:54:32 +0000 (0:00:02.968) 0:00:03.110 ******* 2026-03-11 00:57:20.441406 | orchestrator | skipping: [localhost] 2026-03-11 00:57:20.441410 | orchestrator | 2026-03-11 00:57:20.441415 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2026-03-11 00:57:20.441419 | orchestrator | Wednesday 11 March 2026 00:54:32 +0000 (0:00:00.060) 0:00:03.171 ******* 2026-03-11 00:57:20.441423 | orchestrator | ok: [localhost] 2026-03-11 00:57:20.441427 | orchestrator | 2026-03-11 00:57:20.441431 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 00:57:20.441435 | orchestrator | 2026-03-11 00:57:20.441439 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 00:57:20.441443 | orchestrator | Wednesday 11 March 2026 00:54:32 +0000 (0:00:00.154) 0:00:03.325 ******* 2026-03-11 00:57:20.441447 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:20.441451 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:57:20.441455 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:57:20.441459 | orchestrator | 2026-03-11 00:57:20.441463 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 00:57:20.441470 | orchestrator | Wednesday 11 March 2026 00:54:33 +0000 (0:00:00.324) 0:00:03.650 ******* 2026-03-11 00:57:20.441476 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2026-03-11 00:57:20.441488 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2026-03-11 00:57:20.441496 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2026-03-11 00:57:20.441502 | orchestrator | 2026-03-11 00:57:20.441509 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2026-03-11 00:57:20.441567 | orchestrator | 2026-03-11 00:57:20.441583 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2026-03-11 00:57:20.441590 | orchestrator | Wednesday 11 March 2026 00:54:33 +0000 (0:00:00.489) 0:00:04.139 ******* 2026-03-11 00:57:20.441596 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2026-03-11 00:57:20.441603 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2026-03-11 00:57:20.441610 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2026-03-11 00:57:20.441617 | orchestrator | 2026-03-11 00:57:20.441623 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-11 00:57:20.441733 | orchestrator | Wednesday 11 March 2026 00:54:34 +0000 (0:00:00.329) 0:00:04.469 ******* 2026-03-11 00:57:20.441741 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:57:20.441748 | orchestrator | 2026-03-11 00:57:20.441752 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2026-03-11 00:57:20.441757 | orchestrator | Wednesday 11 March 2026 00:54:34 +0000 (0:00:00.468) 0:00:04.938 ******* 2026-03-11 00:57:20.441778 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-11 00:57:20.441939 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-11 00:57:20.441955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-11 00:57:20.441968 | orchestrator | 2026-03-11 00:57:20.441980 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2026-03-11 00:57:20.441988 | orchestrator | Wednesday 11 March 2026 00:54:37 +0000 (0:00:02.657) 0:00:07.595 ******* 2026-03-11 00:57:20.441996 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:20.442003 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:20.442010 | 
orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:20.442050 | orchestrator | 2026-03-11 00:57:20.442057 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2026-03-11 00:57:20.442063 | orchestrator | Wednesday 11 March 2026 00:54:37 +0000 (0:00:00.656) 0:00:08.251 ******* 2026-03-11 00:57:20.442075 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:20.442083 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:20.442089 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:20.442096 | orchestrator | 2026-03-11 00:57:20.442104 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2026-03-11 00:57:20.442111 | orchestrator | Wednesday 11 March 2026 00:54:39 +0000 (0:00:01.403) 0:00:09.655 ******* 2026-03-11 00:57:20.442130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 
3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-11 00:57:20.442142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 
'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-11 00:57:20.442155 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout 
client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-11 00:57:20.442160 | orchestrator | 2026-03-11 00:57:20.442165 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2026-03-11 00:57:20.442170 | orchestrator | Wednesday 11 March 2026 00:54:42 +0000 (0:00:02.848) 0:00:12.503 ******* 2026-03-11 00:57:20.442175 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:20.442179 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:20.442184 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:20.442191 | orchestrator | 2026-03-11 00:57:20.442198 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2026-03-11 00:57:20.442205 | orchestrator | Wednesday 11 March 2026 00:54:43 +0000 (0:00:01.199) 0:00:13.702 ******* 2026-03-11 00:57:20.442212 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:57:20.442223 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:20.442230 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:57:20.442237 | orchestrator | 2026-03-11 00:57:20.442244 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-11 00:57:20.442248 | orchestrator | Wednesday 11 March 2026 00:54:46 +0000 (0:00:03.284) 0:00:16.987 ******* 2026-03-11 00:57:20.442253 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:57:20.442257 | orchestrator | 2026-03-11 00:57:20.442261 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2026-03-11 
00:57:20.442265 | orchestrator | Wednesday 11 March 2026 00:54:47 +0000 (0:00:00.448) 0:00:17.436 ******* 2026-03-11 00:57:20.442274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:57:20.442280 | orchestrator | 
skipping: [testbed-node-1] 2026-03-11 00:57:20.442290 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:57:20.442308 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:20.442320 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:57:20.442328 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:20.442335 | orchestrator | 2026-03-11 00:57:20.442342 | orchestrator | TASK [service-cert-copy : mariadb 
| Copying over backend internal TLS certificate] *** 2026-03-11 00:57:20.442349 | orchestrator | Wednesday 11 March 2026 00:54:49 +0000 (0:00:02.833) 0:00:20.269 ******* 2026-03-11 00:57:20.442360 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 
5 backup', '']}}}})  2026-03-11 00:57:20.442372 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:20.442380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:57:20.442385 | orchestrator | skipping: 
[testbed-node-0] 2026-03-11 00:57:20.442395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:57:20.442403 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:20.442407 | orchestrator | 2026-03-11 
00:57:20.442411 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2026-03-11 00:57:20.442415 | orchestrator | Wednesday 11 March 2026 00:54:52 +0000 (0:00:02.299) 0:00:22.569 ******* 2026-03-11 00:57:20.442420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:57:20.442424 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:20.442432 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2026-03-11 00:57:20.442439 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:20.442446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2026-03-11 00:57:20.442451 | orchestrator | skipping: 
[testbed-node-1] 2026-03-11 00:57:20.442458 | orchestrator | 2026-03-11 00:57:20.442467 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2026-03-11 00:57:20.442477 | orchestrator | Wednesday 11 March 2026 00:54:54 +0000 (0:00:02.308) 0:00:24.877 ******* 2026-03-11 00:57:20.442489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-11 00:57:20.442505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', 
'']}}}}) 2026-03-11 00:57:20.442519 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2026-03-11 00:57:20.442526 | orchestrator | 2026-03-11 00:57:20.442533 | orchestrator | TASK [mariadb : Create MariaDB volume] 
***************************************** 2026-03-11 00:57:20.442539 | orchestrator | Wednesday 11 March 2026 00:54:57 +0000 (0:00:03.100) 0:00:27.978 ******* 2026-03-11 00:57:20.442546 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:20.442558 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:57:20.442564 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:57:20.442570 | orchestrator | 2026-03-11 00:57:20.442577 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2026-03-11 00:57:20.442583 | orchestrator | Wednesday 11 March 2026 00:54:58 +0000 (0:00:01.064) 0:00:29.043 ******* 2026-03-11 00:57:20.442590 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:20.442596 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:57:20.442603 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:57:20.442610 | orchestrator | 2026-03-11 00:57:20.442617 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2026-03-11 00:57:20.442624 | orchestrator | Wednesday 11 March 2026 00:54:59 +0000 (0:00:00.457) 0:00:29.501 ******* 2026-03-11 00:57:20.442631 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:20.442638 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:57:20.442645 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:57:20.442651 | orchestrator | 2026-03-11 00:57:20.442673 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2026-03-11 00:57:20.442684 | orchestrator | Wednesday 11 March 2026 00:54:59 +0000 (0:00:00.606) 0:00:30.107 ******* 2026-03-11 00:57:20.442692 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2026-03-11 00:57:20.442698 | orchestrator | ...ignoring 2026-03-11 00:57:20.442705 | orchestrator | fatal: [testbed-node-1]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2026-03-11 00:57:20.442711 | orchestrator | ...ignoring 2026-03-11 00:57:20.442717 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2026-03-11 00:57:20.442723 | orchestrator | ...ignoring 2026-03-11 00:57:20.442730 | orchestrator | 2026-03-11 00:57:20.442736 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2026-03-11 00:57:20.442743 | orchestrator | Wednesday 11 March 2026 00:55:10 +0000 (0:00:11.038) 0:00:41.146 ******* 2026-03-11 00:57:20.442749 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:20.442757 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:57:20.442763 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:57:20.442770 | orchestrator | 2026-03-11 00:57:20.442777 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2026-03-11 00:57:20.442785 | orchestrator | Wednesday 11 March 2026 00:55:11 +0000 (0:00:00.463) 0:00:41.610 ******* 2026-03-11 00:57:20.442792 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:20.442799 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:20.442806 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:20.442813 | orchestrator | 2026-03-11 00:57:20.442820 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2026-03-11 00:57:20.442827 | orchestrator | Wednesday 11 March 2026 00:55:12 +0000 (0:00:00.753) 0:00:42.363 ******* 2026-03-11 00:57:20.442833 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:20.442837 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:20.442841 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:20.442845 | orchestrator | 2026-03-11 00:57:20.442849 | orchestrator | TASK 
[mariadb : Extract MariaDB service WSREP sync status] ********************* 2026-03-11 00:57:20.442854 | orchestrator | Wednesday 11 March 2026 00:55:12 +0000 (0:00:00.459) 0:00:42.823 ******* 2026-03-11 00:57:20.442858 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:20.442862 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:20.442866 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:20.442870 | orchestrator | 2026-03-11 00:57:20.442874 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2026-03-11 00:57:20.442878 | orchestrator | Wednesday 11 March 2026 00:55:12 +0000 (0:00:00.425) 0:00:43.248 ******* 2026-03-11 00:57:20.442882 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:20.442891 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:57:20.442895 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:57:20.442899 | orchestrator | 2026-03-11 00:57:20.442903 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2026-03-11 00:57:20.442907 | orchestrator | Wednesday 11 March 2026 00:55:13 +0000 (0:00:00.500) 0:00:43.749 ******* 2026-03-11 00:57:20.442918 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:20.442925 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:20.442931 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:20.442938 | orchestrator | 2026-03-11 00:57:20.442944 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-11 00:57:20.442951 | orchestrator | Wednesday 11 March 2026 00:55:14 +0000 (0:00:00.685) 0:00:44.435 ******* 2026-03-11 00:57:20.442958 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:20.442965 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:20.442970 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2026-03-11 00:57:20.442974 | orchestrator | 2026-03-11 
00:57:20.442978 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2026-03-11 00:57:20.442982 | orchestrator | Wednesday 11 March 2026 00:55:14 +0000 (0:00:00.432) 0:00:44.868 ******* 2026-03-11 00:57:20.442986 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:20.442990 | orchestrator | 2026-03-11 00:57:20.442994 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2026-03-11 00:57:20.442998 | orchestrator | Wednesday 11 March 2026 00:55:24 +0000 (0:00:10.282) 0:00:55.150 ******* 2026-03-11 00:57:20.443002 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:20.443006 | orchestrator | 2026-03-11 00:57:20.443010 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2026-03-11 00:57:20.443015 | orchestrator | Wednesday 11 March 2026 00:55:24 +0000 (0:00:00.147) 0:00:55.298 ******* 2026-03-11 00:57:20.443019 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:20.443023 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:20.443027 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:20.443031 | orchestrator | 2026-03-11 00:57:20.443035 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2026-03-11 00:57:20.443039 | orchestrator | Wednesday 11 March 2026 00:55:25 +0000 (0:00:01.004) 0:00:56.302 ******* 2026-03-11 00:57:20.443043 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:20.443047 | orchestrator | 2026-03-11 00:57:20.443051 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2026-03-11 00:57:20.443055 | orchestrator | Wednesday 11 March 2026 00:55:33 +0000 (0:00:07.854) 0:01:04.157 ******* 2026-03-11 00:57:20.443059 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:20.443063 | orchestrator | 2026-03-11 00:57:20.443067 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB 
service to sync WSREP] ******* 2026-03-11 00:57:20.443071 | orchestrator | Wednesday 11 March 2026 00:55:35 +0000 (0:00:01.668) 0:01:05.826 ******* 2026-03-11 00:57:20.443076 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:20.443083 | orchestrator | 2026-03-11 00:57:20.443090 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2026-03-11 00:57:20.443097 | orchestrator | Wednesday 11 March 2026 00:55:38 +0000 (0:00:02.543) 0:01:08.369 ******* 2026-03-11 00:57:20.443103 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:20.443111 | orchestrator | 2026-03-11 00:57:20.443122 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2026-03-11 00:57:20.443127 | orchestrator | Wednesday 11 March 2026 00:55:38 +0000 (0:00:00.143) 0:01:08.512 ******* 2026-03-11 00:57:20.443132 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:20.443138 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:20.443145 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:20.443151 | orchestrator | 2026-03-11 00:57:20.443157 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2026-03-11 00:57:20.443163 | orchestrator | Wednesday 11 March 2026 00:55:38 +0000 (0:00:00.364) 0:01:08.877 ******* 2026-03-11 00:57:20.443175 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:20.443182 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:57:20.443189 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:57:20.443197 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2026-03-11 00:57:20.443202 | orchestrator | 2026-03-11 00:57:20.443206 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2026-03-11 00:57:20.443210 | orchestrator | skipping: no hosts matched 2026-03-11 00:57:20.443214 | orchestrator | 2026-03-11 00:57:20.443218 
| orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-11 00:57:20.443222 | orchestrator | 2026-03-11 00:57:20.443226 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-11 00:57:20.443230 | orchestrator | Wednesday 11 March 2026 00:55:38 +0000 (0:00:00.467) 0:01:09.345 ******* 2026-03-11 00:57:20.443234 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:57:20.443238 | orchestrator | 2026-03-11 00:57:20.443242 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-11 00:57:20.443246 | orchestrator | Wednesday 11 March 2026 00:56:01 +0000 (0:00:22.247) 0:01:31.592 ******* 2026-03-11 00:57:20.443250 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:57:20.443254 | orchestrator | 2026-03-11 00:57:20.443258 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-11 00:57:20.443262 | orchestrator | Wednesday 11 March 2026 00:56:11 +0000 (0:00:10.567) 0:01:42.159 ******* 2026-03-11 00:57:20.443266 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:57:20.443271 | orchestrator | 2026-03-11 00:57:20.443275 | orchestrator | PLAY [Start mariadb services] ************************************************** 2026-03-11 00:57:20.443279 | orchestrator | 2026-03-11 00:57:20.443283 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-11 00:57:20.443287 | orchestrator | Wednesday 11 March 2026 00:56:14 +0000 (0:00:02.397) 0:01:44.556 ******* 2026-03-11 00:57:20.443291 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:57:20.443295 | orchestrator | 2026-03-11 00:57:20.443299 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-11 00:57:20.443303 | orchestrator | Wednesday 11 March 2026 00:56:29 +0000 (0:00:15.076) 0:01:59.633 ******* 2026-03-11 00:57:20.443307 | 
orchestrator | ok: [testbed-node-2] 2026-03-11 00:57:20.443311 | orchestrator | 2026-03-11 00:57:20.443315 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-11 00:57:20.443320 | orchestrator | Wednesday 11 March 2026 00:56:44 +0000 (0:00:15.603) 0:02:15.237 ******* 2026-03-11 00:57:20.443324 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:57:20.443328 | orchestrator | 2026-03-11 00:57:20.443332 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2026-03-11 00:57:20.443336 | orchestrator | 2026-03-11 00:57:20.443344 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2026-03-11 00:57:20.443348 | orchestrator | Wednesday 11 March 2026 00:56:47 +0000 (0:00:02.487) 0:02:17.724 ******* 2026-03-11 00:57:20.443352 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:20.443356 | orchestrator | 2026-03-11 00:57:20.443361 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2026-03-11 00:57:20.443365 | orchestrator | Wednesday 11 March 2026 00:56:59 +0000 (0:00:11.908) 0:02:29.632 ******* 2026-03-11 00:57:20.443369 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:20.443373 | orchestrator | 2026-03-11 00:57:20.443377 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2026-03-11 00:57:20.443381 | orchestrator | Wednesday 11 March 2026 00:57:03 +0000 (0:00:04.584) 0:02:34.217 ******* 2026-03-11 00:57:20.443385 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:20.443389 | orchestrator | 2026-03-11 00:57:20.443393 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2026-03-11 00:57:20.443397 | orchestrator | 2026-03-11 00:57:20.443401 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2026-03-11 00:57:20.443405 | orchestrator | 
Wednesday 11 March 2026 00:57:06 +0000 (0:00:02.639) 0:02:36.857 ******* 2026-03-11 00:57:20.443413 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:57:20.443417 | orchestrator | 2026-03-11 00:57:20.443421 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2026-03-11 00:57:20.443425 | orchestrator | Wednesday 11 March 2026 00:57:07 +0000 (0:00:00.510) 0:02:37.368 ******* 2026-03-11 00:57:20.443429 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:20.443433 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:20.443437 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:20.443441 | orchestrator | 2026-03-11 00:57:20.443445 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2026-03-11 00:57:20.443450 | orchestrator | Wednesday 11 March 2026 00:57:09 +0000 (0:00:02.207) 0:02:39.575 ******* 2026-03-11 00:57:20.443454 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:20.443458 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:20.443465 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:20.443471 | orchestrator | 2026-03-11 00:57:20.443478 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2026-03-11 00:57:20.443488 | orchestrator | Wednesday 11 March 2026 00:57:11 +0000 (0:00:02.154) 0:02:41.730 ******* 2026-03-11 00:57:20.443496 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:20.443503 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:20.443509 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:20.443516 | orchestrator | 2026-03-11 00:57:20.443522 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2026-03-11 00:57:20.443530 | orchestrator | Wednesday 11 March 2026 00:57:13 +0000 (0:00:02.210) 0:02:43.940 ******* 2026-03-11 00:57:20.443536 | 
orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:20.443547 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:20.443554 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:57:20.443561 | orchestrator | 2026-03-11 00:57:20.443569 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2026-03-11 00:57:20.443576 | orchestrator | Wednesday 11 March 2026 00:57:15 +0000 (0:00:02.150) 0:02:46.091 ******* 2026-03-11 00:57:20.443581 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:57:20.443586 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:57:20.443590 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:57:20.443594 | orchestrator | 2026-03-11 00:57:20.443598 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2026-03-11 00:57:20.443611 | orchestrator | Wednesday 11 March 2026 00:57:18 +0000 (0:00:03.045) 0:02:49.136 ******* 2026-03-11 00:57:20.443626 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:57:20.443635 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:57:20.443642 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:57:20.443649 | orchestrator | 2026-03-11 00:57:20.443671 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 00:57:20.443679 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2026-03-11 00:57:20.443686 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2026-03-11 00:57:20.443693 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-11 00:57:20.443700 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2026-03-11 00:57:20.443706 | orchestrator | 2026-03-11 00:57:20.443713 | orchestrator | 2026-03-11 00:57:20.443719 | orchestrator | 
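The PLAY RECAP above reports per-host counters (`ok=34 changed=16 unreachable=0 failed=0 ...`). When post-processing logs like this one, those lines can be parsed mechanically; a sketch (helper names are hypothetical):

```python
import re

# One recap row: a host, a colon, then whitespace-separated key=value counters.
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line: str) -> tuple[str, dict[str, int]]:
    """Parse one PLAY RECAP line into (host, counter dict)."""
    m = RECAP_RE.match(line.strip())
    if not m:
        raise ValueError(f"not a recap line: {line!r}")
    stats = {k: int(v) for k, v in
             (pair.split("=") for pair in m.group("stats").split())}
    return m.group("host"), stats
```

For example, the `testbed-node-0` row above yields `changed=16` and `failed=0`, which is what a gating check would inspect.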
TASKS RECAP ******************************************************************** 2026-03-11 00:57:20.443727 | orchestrator | Wednesday 11 March 2026 00:57:19 +0000 (0:00:00.254) 0:02:49.390 ******* 2026-03-11 00:57:20.443740 | orchestrator | =============================================================================== 2026-03-11 00:57:20.443747 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 37.32s 2026-03-11 00:57:20.443754 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 26.17s 2026-03-11 00:57:20.443760 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 11.91s 2026-03-11 00:57:20.443768 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.04s 2026-03-11 00:57:20.443774 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.28s 2026-03-11 00:57:20.443781 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.85s 2026-03-11 00:57:20.443790 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 4.88s 2026-03-11 00:57:20.443795 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.58s 2026-03-11 00:57:20.443799 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 3.28s 2026-03-11 00:57:20.443803 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.10s 2026-03-11 00:57:20.443807 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.05s 2026-03-11 00:57:20.443811 | orchestrator | Check MariaDB service --------------------------------------------------- 2.97s 2026-03-11 00:57:20.443815 | orchestrator | mariadb : Copying over config.json files for services ------------------- 2.85s 2026-03-11 00:57:20.443819 | orchestrator | service-cert-copy : 
mariadb | Copying over extra CA certificates -------- 2.83s 2026-03-11 00:57:20.443823 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 2.66s 2026-03-11 00:57:20.443827 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.64s 2026-03-11 00:57:20.443831 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.54s 2026-03-11 00:57:20.443835 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.31s 2026-03-11 00:57:20.443839 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.30s 2026-03-11 00:57:20.443844 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.21s 2026-03-11 00:57:20.445942 | orchestrator | 2026-03-11 00:57:20 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:57:20.447555 | orchestrator | 2026-03-11 00:57:20 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:57:20.449482 | orchestrator | 2026-03-11 00:57:20 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:57:20.449897 | orchestrator | 2026-03-11 00:57:20 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:23.509646 | orchestrator | 2026-03-11 00:57:23 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:57:23.511900 | orchestrator | 2026-03-11 00:57:23 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:57:23.513636 | orchestrator | 2026-03-11 00:57:23 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:57:23.513995 | orchestrator | 2026-03-11 00:57:23 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:26.561375 | orchestrator | 2026-03-11 00:57:26 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 
00:57:26.563729 | orchestrator | 2026-03-11 00:57:26 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:57:26.565723 | orchestrator | 2026-03-11 00:57:26 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:57:26.567788 | orchestrator | 2026-03-11 00:57:26 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:29.607218 | orchestrator | 2026-03-11 00:57:29 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:57:29.609236 | orchestrator | 2026-03-11 00:57:29 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:57:29.611016 | orchestrator | 2026-03-11 00:57:29 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:57:29.611067 | orchestrator | 2026-03-11 00:57:29 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:32.639403 | orchestrator | 2026-03-11 00:57:32 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:57:32.640266 | orchestrator | 2026-03-11 00:57:32 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:57:32.640337 | orchestrator | 2026-03-11 00:57:32 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:57:32.640359 | orchestrator | 2026-03-11 00:57:32 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:35.674999 | orchestrator | 2026-03-11 00:57:35 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:57:35.675088 | orchestrator | 2026-03-11 00:57:35 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:57:35.675773 | orchestrator | 2026-03-11 00:57:35 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:57:35.675809 | orchestrator | 2026-03-11 00:57:35 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:38.701839 | orchestrator | 2026-03-11 00:57:38 | 
INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:57:38.703310 | orchestrator | 2026-03-11 00:57:38 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:57:38.705598 | orchestrator | 2026-03-11 00:57:38 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:57:38.705810 | orchestrator | 2026-03-11 00:57:38 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:41.736842 | orchestrator | 2026-03-11 00:57:41 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:57:41.738144 | orchestrator | 2026-03-11 00:57:41 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:57:41.740599 | orchestrator | 2026-03-11 00:57:41 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:57:41.740976 | orchestrator | 2026-03-11 00:57:41 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:44.774094 | orchestrator | 2026-03-11 00:57:44 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:57:44.780497 | orchestrator | 2026-03-11 00:57:44 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:57:44.784912 | orchestrator | 2026-03-11 00:57:44 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:57:44.784955 | orchestrator | 2026-03-11 00:57:44 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:47.833061 | orchestrator | 2026-03-11 00:57:47 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:57:47.834822 | orchestrator | 2026-03-11 00:57:47 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:57:47.836463 | orchestrator | 2026-03-11 00:57:47 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:57:47.836801 | orchestrator | 2026-03-11 00:57:47 | INFO  | Wait 1 second(s) until 
the next check 2026-03-11 00:57:50.879842 | orchestrator | 2026-03-11 00:57:50 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:57:50.881209 | orchestrator | 2026-03-11 00:57:50 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:57:50.885190 | orchestrator | 2026-03-11 00:57:50 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:57:50.885266 | orchestrator | 2026-03-11 00:57:50 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:53.940919 | orchestrator | 2026-03-11 00:57:53 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:57:53.942394 | orchestrator | 2026-03-11 00:57:53 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:57:53.944917 | orchestrator | 2026-03-11 00:57:53 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:57:53.945043 | orchestrator | 2026-03-11 00:57:53 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:57:56.988460 | orchestrator | 2026-03-11 00:57:56 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:57:56.990810 | orchestrator | 2026-03-11 00:57:56 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:57:56.992044 | orchestrator | 2026-03-11 00:57:56 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:57:56.992082 | orchestrator | 2026-03-11 00:57:56 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:58:00.041193 | orchestrator | 2026-03-11 00:58:00 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:58:00.043046 | orchestrator | 2026-03-11 00:58:00 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:58:00.045477 | orchestrator | 2026-03-11 00:58:00 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 
00:58:00.045524 | orchestrator | 2026-03-11 00:58:00 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:58:03.086828 | orchestrator | 2026-03-11 00:58:03 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:58:03.087544 | orchestrator | 2026-03-11 00:58:03 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:58:03.088833 | orchestrator | 2026-03-11 00:58:03 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:58:03.089166 | orchestrator | 2026-03-11 00:58:03 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:58:06.138343 | orchestrator | 2026-03-11 00:58:06 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:58:06.141910 | orchestrator | 2026-03-11 00:58:06 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:58:06.143214 | orchestrator | 2026-03-11 00:58:06 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:58:06.143467 | orchestrator | 2026-03-11 00:58:06 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:58:09.192879 | orchestrator | 2026-03-11 00:58:09 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:58:09.195224 | orchestrator | 2026-03-11 00:58:09 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:58:09.199581 | orchestrator | 2026-03-11 00:58:09 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:58:09.199761 | orchestrator | 2026-03-11 00:58:09 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:58:12.230884 | orchestrator | 2026-03-11 00:58:12 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:58:12.231840 | orchestrator | 2026-03-11 00:58:12 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:58:12.233697 | orchestrator | 2026-03-11 00:58:12 | 
INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:58:12.233753 | orchestrator | 2026-03-11 00:58:12 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:58:15.286319 | orchestrator | 2026-03-11 00:58:15 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:58:15.286382 | orchestrator | 2026-03-11 00:58:15 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:58:15.287654 | orchestrator | 2026-03-11 00:58:15 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:58:15.287714 | orchestrator | 2026-03-11 00:58:15 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:58:18.333108 | orchestrator | 2026-03-11 00:58:18 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:58:18.337107 | orchestrator | 2026-03-11 00:58:18 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:58:18.340967 | orchestrator | 2026-03-11 00:58:18 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:58:18.342251 | orchestrator | 2026-03-11 00:58:18 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:58:21.390395 | orchestrator | 2026-03-11 00:58:21 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state STARTED 2026-03-11 00:58:21.392804 | orchestrator | 2026-03-11 00:58:21 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED 2026-03-11 00:58:21.395927 | orchestrator | 2026-03-11 00:58:21 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:58:21.395965 | orchestrator | 2026-03-11 00:58:21 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:58:24.453424 | orchestrator | 2026-03-11 00:58:24 | INFO  | Task ea03c371-604b-4748-bb9e-f9bd7d4f7be1 is in state SUCCESS 2026-03-11 00:58:24.454335 | orchestrator | 2026-03-11 00:58:24.454366 | orchestrator | [WARNING]: Collection 
community.general does not support Ansible version 2026-03-11 00:58:24.454373 | orchestrator | 2.16.14 2026-03-11 00:58:24.454379 | orchestrator | 2026-03-11 00:58:24.454385 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2026-03-11 00:58:24.454390 | orchestrator | 2026-03-11 00:58:24.454396 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2026-03-11 00:58:24.454402 | orchestrator | Wednesday 11 March 2026 00:56:20 +0000 (0:00:00.567) 0:00:00.567 ******* 2026-03-11 00:58:24.454407 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 00:58:24.454413 | orchestrator | 2026-03-11 00:58:24.454418 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2026-03-11 00:58:24.454424 | orchestrator | Wednesday 11 March 2026 00:56:20 +0000 (0:00:00.590) 0:00:01.158 ******* 2026-03-11 00:58:24.454429 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:58:24.454435 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:58:24.454441 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:58:24.454446 | orchestrator | 2026-03-11 00:58:24.454452 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2026-03-11 00:58:24.454457 | orchestrator | Wednesday 11 March 2026 00:56:21 +0000 (0:00:00.655) 0:00:01.814 ******* 2026-03-11 00:58:24.454462 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:58:24.454468 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:58:24.454473 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:58:24.454478 | orchestrator | 2026-03-11 00:58:24.454484 | orchestrator | TASK [ceph-facts : Check if podman binary is present] ************************** 2026-03-11 00:58:24.454489 | orchestrator | Wednesday 11 March 2026 00:56:21 +0000 (0:00:00.270) 0:00:02.085 ******* 2026-03-11 00:58:24.454494 | orchestrator | 
ok: [testbed-node-3] 2026-03-11 00:58:24.454500 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:58:24.454519 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:58:24.454525 | orchestrator | 2026-03-11 00:58:24.454530 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2026-03-11 00:58:24.454535 | orchestrator | Wednesday 11 March 2026 00:56:22 +0000 (0:00:00.719) 0:00:02.804 ******* 2026-03-11 00:58:24.454541 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:58:24.454546 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:58:24.454551 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:58:24.454556 | orchestrator | 2026-03-11 00:58:24.454562 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2026-03-11 00:58:24.454567 | orchestrator | Wednesday 11 March 2026 00:56:22 +0000 (0:00:00.267) 0:00:03.072 ******* 2026-03-11 00:58:24.454572 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:58:24.454578 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:58:24.454583 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:58:24.454588 | orchestrator | 2026-03-11 00:58:24.454594 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2026-03-11 00:58:24.454626 | orchestrator | Wednesday 11 March 2026 00:56:23 +0000 (0:00:00.271) 0:00:03.344 ******* 2026-03-11 00:58:24.454632 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:58:24.454638 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:58:24.454643 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:58:24.454648 | orchestrator | 2026-03-11 00:58:24.454653 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2026-03-11 00:58:24.454702 | orchestrator | Wednesday 11 March 2026 00:56:23 +0000 (0:00:00.266) 0:00:03.611 ******* 2026-03-11 00:58:24.454708 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:58:24.454714 | orchestrator | 
skipping: [testbed-node-4] 2026-03-11 00:58:24.454739 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:58:24.454776 | orchestrator | 2026-03-11 00:58:24.454782 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2026-03-11 00:58:24.454787 | orchestrator | Wednesday 11 March 2026 00:56:23 +0000 (0:00:00.386) 0:00:03.997 ******* 2026-03-11 00:58:24.454806 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:58:24.454811 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:58:24.454816 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:58:24.454977 | orchestrator | 2026-03-11 00:58:24.454985 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2026-03-11 00:58:24.454990 | orchestrator | Wednesday 11 March 2026 00:56:23 +0000 (0:00:00.263) 0:00:04.261 ******* 2026-03-11 00:58:24.454995 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-11 00:58:24.455001 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-11 00:58:24.455006 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-11 00:58:24.455012 | orchestrator | 2026-03-11 00:58:24.455017 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2026-03-11 00:58:24.455022 | orchestrator | Wednesday 11 March 2026 00:56:24 +0000 (0:00:00.590) 0:00:04.852 ******* 2026-03-11 00:58:24.455028 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:58:24.455033 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:58:24.455038 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:58:24.455044 | orchestrator | 2026-03-11 00:58:24.455049 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2026-03-11 00:58:24.455055 | orchestrator | Wednesday 11 March 2026 00:56:24 +0000 (0:00:00.388) 0:00:05.241 
******* 2026-03-11 00:58:24.455068 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2026-03-11 00:58:24.455074 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2026-03-11 00:58:24.455079 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2026-03-11 00:58:24.455084 | orchestrator | 2026-03-11 00:58:24.455090 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2026-03-11 00:58:24.455095 | orchestrator | Wednesday 11 March 2026 00:56:26 +0000 (0:00:01.959) 0:00:07.200 ******* 2026-03-11 00:58:24.455106 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2026-03-11 00:58:24.455112 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2026-03-11 00:58:24.455117 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2026-03-11 00:58:24.455122 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:58:24.455128 | orchestrator | 2026-03-11 00:58:24.455141 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2026-03-11 00:58:24.455147 | orchestrator | Wednesday 11 March 2026 00:56:27 +0000 (0:00:00.545) 0:00:07.745 ******* 2026-03-11 00:58:24.455153 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2026-03-11 00:58:24.455160 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2026-03-11 00:58:24.455165 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 
'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2026-03-11 00:58:24.455284 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:58:24.455294 | orchestrator | 2026-03-11 00:58:24.455299 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2026-03-11 00:58:24.455305 | orchestrator | Wednesday 11 March 2026 00:56:28 +0000 (0:00:00.694) 0:00:08.439 ******* 2026-03-11 00:58:24.455311 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:24.455317 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:24.455323 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:24.455329 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:58:24.455334 | orchestrator | 2026-03-11 
00:58:24.455340 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2026-03-11 00:58:24.455345 | orchestrator | Wednesday 11 March 2026 00:56:28 +0000 (0:00:00.447) 0:00:08.886 ******* 2026-03-11 00:58:24.455352 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '90ff566711cd', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2026-03-11 00:56:25.568971', 'end': '2026-03-11 00:56:25.595079', 'delta': '0:00:00.026108', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['90ff566711cd'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2026-03-11 00:58:24.455367 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '32ce41b04815', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2026-03-11 00:56:26.224102', 'end': '2026-03-11 00:56:26.256489', 'delta': '0:00:00.032387', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['32ce41b04815'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2026-03-11 00:58:24.455387 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '882054576c29', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 
'name=ceph-mon-testbed-node-2'], 'start': '2026-03-11 00:56:26.738843', 'end': '2026-03-11 00:56:26.771839', 'delta': '0:00:00.032996', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['882054576c29'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2026-03-11 00:58:24.455393 | orchestrator | 2026-03-11 00:58:24.455399 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2026-03-11 00:58:24.455404 | orchestrator | Wednesday 11 March 2026 00:56:28 +0000 (0:00:00.197) 0:00:09.084 ******* 2026-03-11 00:58:24.455409 | orchestrator | ok: [testbed-node-3] 2026-03-11 00:58:24.455414 | orchestrator | ok: [testbed-node-4] 2026-03-11 00:58:24.455420 | orchestrator | ok: [testbed-node-5] 2026-03-11 00:58:24.455425 | orchestrator | 2026-03-11 00:58:24.455430 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2026-03-11 00:58:24.455435 | orchestrator | Wednesday 11 March 2026 00:56:29 +0000 (0:00:00.438) 0:00:09.522 ******* 2026-03-11 00:58:24.455441 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2026-03-11 00:58:24.455446 | orchestrator | 2026-03-11 00:58:24.455451 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2026-03-11 00:58:24.455457 | orchestrator | Wednesday 11 March 2026 00:56:30 +0000 (0:00:01.755) 0:00:11.278 ******* 2026-03-11 00:58:24.455462 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:58:24.455468 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:58:24.455473 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:58:24.455479 | 
orchestrator |
2026-03-11 00:58:24.455484 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2026-03-11 00:58:24.455489 | orchestrator | Wednesday 11 March 2026 00:56:31 +0000 (0:00:00.341) 0:00:11.619 *******
2026-03-11 00:58:24.455495 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.455500 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:24.455505 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:24.455511 | orchestrator |
2026-03-11 00:58:24.455516 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-11 00:58:24.455521 | orchestrator | Wednesday 11 March 2026 00:56:31 +0000 (0:00:00.461) 0:00:12.080 *******
2026-03-11 00:58:24.455527 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.455532 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:24.455537 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:24.455542 | orchestrator |
2026-03-11 00:58:24.455548 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2026-03-11 00:58:24.455553 | orchestrator | Wednesday 11 March 2026 00:56:32 +0000 (0:00:00.541) 0:00:12.622 *******
2026-03-11 00:58:24.455568 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:58:24.455573 | orchestrator |
2026-03-11 00:58:24.455578 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2026-03-11 00:58:24.455584 | orchestrator | Wednesday 11 March 2026 00:56:32 +0000 (0:00:00.139) 0:00:12.761 *******
2026-03-11 00:58:24.455589 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.455594 | orchestrator |
2026-03-11 00:58:24.455629 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2026-03-11 00:58:24.455635 | orchestrator | Wednesday 11 March 2026 00:56:32 +0000 (0:00:00.257) 0:00:13.019 *******
2026-03-11 00:58:24.455641 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.455647 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:24.455652 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:24.455658 | orchestrator |
2026-03-11 00:58:24.455662 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2026-03-11 00:58:24.455668 | orchestrator | Wednesday 11 March 2026 00:56:33 +0000 (0:00:00.284) 0:00:13.304 *******
2026-03-11 00:58:24.455672 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.455677 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:24.455682 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:24.455687 | orchestrator |
2026-03-11 00:58:24.455692 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2026-03-11 00:58:24.455698 | orchestrator | Wednesday 11 March 2026 00:56:33 +0000 (0:00:00.324) 0:00:13.629 *******
2026-03-11 00:58:24.455703 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.455708 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:24.455713 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:24.455719 | orchestrator |
2026-03-11 00:58:24.455724 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2026-03-11 00:58:24.455729 | orchestrator | Wednesday 11 March 2026 00:56:34 +0000 (0:00:00.472) 0:00:14.101 *******
2026-03-11 00:58:24.455735 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.455743 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:24.455749 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:24.455754 | orchestrator |
2026-03-11 00:58:24.455759 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2026-03-11 00:58:24.455765 | orchestrator | Wednesday 11 March 2026 00:56:34 +0000 (0:00:00.309) 0:00:14.410 *******
2026-03-11 00:58:24.455770 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.455775 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:24.455780 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:24.455786 | orchestrator |
2026-03-11 00:58:24.455791 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2026-03-11 00:58:24.455796 | orchestrator | Wednesday 11 March 2026 00:56:34 +0000 (0:00:00.295) 0:00:14.706 *******
2026-03-11 00:58:24.455801 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.455807 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:24.455812 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:24.455832 | orchestrator |
2026-03-11 00:58:24.455838 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2026-03-11 00:58:24.455844 | orchestrator | Wednesday 11 March 2026 00:56:34 +0000 (0:00:00.296) 0:00:15.003 *******
2026-03-11 00:58:24.455849 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.455854 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:24.455859 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:24.455864 | orchestrator |
2026-03-11 00:58:24.455870 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2026-03-11 00:58:24.455875 | orchestrator | Wednesday 11 March 2026 00:56:35 +0000 (0:00:00.460) 0:00:15.463 *******
2026-03-11 00:58:24.455881 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--71564836--6f16--509c--9c2d--06150302b625-osd--block--71564836--6f16--509c--9c2d--06150302b625', 'dm-uuid-LVM-pyZ5rB0R0qmIWUxI5gCQVKaKF0hu4glj74GAuXfKv2MAaOoBo1mxVFBDd2JymnHg'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 
'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.455924 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--20faa7ec--42ec--56bc--96e8--0b7388032f08-osd--block--20faa7ec--42ec--56bc--96e8--0b7388032f08', 'dm-uuid-LVM-pXd1UaKkJmiNo8fAWwtODo0F9CzuBWMNam2cYCT1dcxyx2pRueNkuIYX2dwy7nwk'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.455931 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.455938 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.455943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.455949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.455957 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.455976 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.455982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.455991 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.455998 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8', 'scsi-SQEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part1', 'scsi-SQEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part14', 'scsi-SQEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part15', 'scsi-SQEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part16', 'scsi-SQEMU_QEMU_HARDDISK_32780fff-28da-4ed5-b9f8-cc520a8285e8-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:24.456007 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--71564836--6f16--509c--9c2d--06150302b625-osd--block--71564836--6f16--509c--9c2d--06150302b625'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ivV1Pd-GQUU-0hyB-f198-psgw-Gkx3-f2lD49', 'scsi-0QEMU_QEMU_HARDDISK_093a0f58-cc4b-4485-9e6f-5c5128ebf642', 'scsi-SQEMU_QEMU_HARDDISK_093a0f58-cc4b-4485-9e6f-5c5128ebf642'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:24.456032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2fb06152--6c58--5f9b--bb14--a51d715c3982-osd--block--2fb06152--6c58--5f9b--bb14--a51d715c3982', 'dm-uuid-LVM-7Uuvgqh6NcBREtc01Xdtz3qAOv3zfovluPSUPEC7NhlzmhxC0Nc6POtStmfO1Wdw'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 
'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456042 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--20faa7ec--42ec--56bc--96e8--0b7388032f08-osd--block--20faa7ec--42ec--56bc--96e8--0b7388032f08'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fAR1X5-7HZS-e9KQ-Z8pC-qVVR-MPmq-1ajZSi', 'scsi-0QEMU_QEMU_HARDDISK_ae1c2658-52b8-455d-907b-e7170e3050e5', 'scsi-SQEMU_QEMU_HARDDISK_ae1c2658-52b8-455d-907b-e7170e3050e5'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:24.456047 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2e0b0e2c--c482--530c--847f--054ffec93e8e-osd--block--2e0b0e2c--c482--530c--847f--054ffec93e8e', 'dm-uuid-LVM-AKpMPdveCGqZfTHNqUdOrwypZcJWcalbIZh1AdPadOXUp4IlZWBvWWFtgVHCFWIq'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456053 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_8ff314bd-8772-4cae-a8e3-239e2ae43cb3', 'scsi-SQEMU_QEMU_HARDDISK_8ff314bd-8772-4cae-a8e3-239e2ae43cb3'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:24.456059 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456064 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-02-57-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:24.456072 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
 2026-03-11 00:58:24.456091 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456105 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456116 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:58:24.456122 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456127 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a', 'scsi-SQEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part1', 'scsi-SQEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part14', 'scsi-SQEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part15', 'scsi-SQEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part16', 'scsi-SQEMU_QEMU_HARDDISK_b5772bb3-dfe8-42a5-804b-c4140f3b8e5a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:24.456154 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2fb06152--6c58--5f9b--bb14--a51d715c3982-osd--block--2fb06152--6c58--5f9b--bb14--a51d715c3982'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-lY4cgz-KPol-Cy9h-jYPc-tiHv-Zjms-O98Zn3', 'scsi-0QEMU_QEMU_HARDDISK_eb5be362-3b33-4846-8138-86194f5d1a8a', 'scsi-SQEMU_QEMU_HARDDISK_eb5be362-3b33-4846-8138-86194f5d1a8a'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:24.456160 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--2e0b0e2c--c482--530c--847f--054ffec93e8e-osd--block--2e0b0e2c--c482--530c--847f--054ffec93e8e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-fMJKz6-77i5-37CY-TSkd-IvL9-nNqV-LEHCjI', 'scsi-0QEMU_QEMU_HARDDISK_f36f8e1d-14c5-427c-b242-d446b19c77db', 'scsi-SQEMU_QEMU_HARDDISK_f36f8e1d-14c5-427c-b242-d446b19c77db'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:24.456166 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_288642ce-5fa9-4bc7-a508-61d675ea6136', 'scsi-SQEMU_QEMU_HARDDISK_288642ce-5fa9-4bc7-a508-61d675ea6136'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:24.456171 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-02-55-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:24.456179 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--c12a1925--beca--5a04--a9cd--b492500b7146-osd--block--c12a1925--beca--5a04--a9cd--b492500b7146', 'dm-uuid-LVM-CWgETdHvS4Dy2AyHaaYd2xmULpdrXOiJcr9BFGM4S4KpW0eOZxQoG98LLDMBbi6M'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456185 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:58:24.456195 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': 
['dm-name-ceph--75b18a9f--434b--5575--8ed7--e1e8868eceb5-osd--block--75b18a9f--434b--5575--8ed7--e1e8868eceb5', 'dm-uuid-LVM-17OUSIdr3HuYahsLwJHPMesEwkWU3kj0L7NymUjJrvhQFMjl04ZdJ0mGQS50dlGZ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456209 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456220 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 
'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456226 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456231 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456236 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2026-03-11 00:58:24.456256 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6', 'scsi-SQEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part1', 'scsi-SQEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part14', 'scsi-SQEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part15', 'scsi-SQEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part16', 'scsi-SQEMU_QEMU_HARDDISK_cf393f00-e485-43dd-9184-e931a616dca6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 
'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:24.456262 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--c12a1925--beca--5a04--a9cd--b492500b7146-osd--block--c12a1925--beca--5a04--a9cd--b492500b7146'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-tuJMcM-uQnl-JSTs-WrnO-sWxn-3scz-VXnlPQ', 'scsi-0QEMU_QEMU_HARDDISK_7fe845d7-e58c-4b3d-846a-c114ba83f0c4', 'scsi-SQEMU_QEMU_HARDDISK_7fe845d7-e58c-4b3d-846a-c114ba83f0c4'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:24.456268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--75b18a9f--434b--5575--8ed7--e1e8868eceb5-osd--block--75b18a9f--434b--5575--8ed7--e1e8868eceb5'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qz6mOZ-2wp1-3a0W-Qzeb-M25K-Xnxh-aHxL2P', 'scsi-0QEMU_QEMU_HARDDISK_b058385a-4b50-41f2-be6b-aeff7a6e6499', 'scsi-SQEMU_QEMU_HARDDISK_b058385a-4b50-41f2-be6b-aeff7a6e6499'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:24.456276 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc665229-5891-49fd-b2c5-1ba6ac78c628', 'scsi-SQEMU_QEMU_HARDDISK_fc665229-5891-49fd-b2c5-1ba6ac78c628'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:24.456288 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2026-03-11 00:58:24.456305 | orchestrator | skipping: [testbed-node-5] 2026-03-11 00:58:24.456311 | orchestrator | 2026-03-11 00:58:24.456317 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2026-03-11 00:58:24.456322 | orchestrator | Wednesday 11 March 2026 00:56:35 +0000 (0:00:00.559) 0:00:16.022 ******* 2026-03-11 00:58:24.456328 | orchestrator | [repetitive per-device loop output elided: every block device on testbed-node-3, testbed-node-4 and testbed-node-5 (dm-0, dm-1, loop0-loop7, sda, sdb, sdc, sdd, sr0) was skipped because the conditional 'osd_auto_discovery | default(False) | bool' evaluated to false] 2026-03-11 00:58:24.456511 | orchestrator | skipping: [testbed-node-3] 2026-03-11 00:58:24.456646 | orchestrator | skipping: [testbed-node-4] 2026-03-11 00:58:24.456671 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--75b18a9f--434b--5575--8ed7--e1e8868eceb5-osd--block--75b18a9f--434b--5575--8ed7--e1e8868eceb5'], 'host': 'SCSI storage controller: Red Hat, Inc.
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-qz6mOZ-2wp1-3a0W-Qzeb-M25K-Xnxh-aHxL2P', 'scsi-0QEMU_QEMU_HARDDISK_b058385a-4b50-41f2-be6b-aeff7a6e6499', 'scsi-SQEMU_QEMU_HARDDISK_b058385a-4b50-41f2-be6b-aeff7a6e6499'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:24.456694 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_fc665229-5891-49fd-b2c5-1ba6ac78c628', 'scsi-SQEMU_QEMU_HARDDISK_fc665229-5891-49fd-b2c5-1ba6ac78c628'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2026-03-11 00:58:24.456703 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2026-03-11-00-02-48-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '1', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})
2026-03-11 00:58:24.456707 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:24.456713 | orchestrator |
2026-03-11 00:58:24.456718 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ******************************
2026-03-11 00:58:24.456723 | orchestrator | Wednesday 11 March 2026 00:56:36 +0000 (0:00:00.579) 0:00:16.602 *******
2026-03-11 00:58:24.456729 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:58:24.456735 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:58:24.456740 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:58:24.456745 | orchestrator |
2026-03-11 00:58:24.456751 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] ***************
2026-03-11 00:58:24.456815 | orchestrator | Wednesday 11 March 2026 00:56:36 +0000 (0:00:00.623) 0:00:17.226 *******
2026-03-11 00:58:24.456820 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:58:24.456826 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:58:24.456831 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:58:24.456836 | orchestrator |
2026-03-11 00:58:24.456841 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-11 00:58:24.456847 | orchestrator | Wednesday 11 March 2026 00:56:37 +0000 (0:00:00.449) 0:00:17.676 *******
2026-03-11 00:58:24.456852 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:58:24.456858 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:58:24.456863 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:58:24.456868 | orchestrator |
2026-03-11 00:58:24.456873 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-11 00:58:24.456879 | orchestrator | Wednesday 11 March 2026 00:56:38 +0000 (0:00:00.641) 0:00:18.317 *******
2026-03-11 00:58:24.456888 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.456893 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:24.456899 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:24.456904 | orchestrator |
2026-03-11 00:58:24.456910 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] ***************************
2026-03-11 00:58:24.456915 | orchestrator | Wednesday 11 March 2026 00:56:38 +0000 (0:00:00.303) 0:00:18.621 *******
2026-03-11 00:58:24.456921 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.456926 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:24.456931 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:24.456936 | orchestrator |
2026-03-11 00:58:24.456941 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] ***********************
2026-03-11 00:58:24.456945 | orchestrator | Wednesday 11 March 2026 00:56:38 +0000 (0:00:00.397) 0:00:19.018 *******
2026-03-11 00:58:24.456950 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.456956 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:24.456961 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:24.456966 | orchestrator |
2026-03-11 00:58:24.456971 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] *************************
2026-03-11 00:58:24.456976 | orchestrator | Wednesday 11 March 2026 00:56:39 +0000 (0:00:00.486) 0:00:19.505 *******
2026-03-11 00:58:24.456980 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2026-03-11 00:58:24.456985 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2026-03-11 00:58:24.456990 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2026-03-11 00:58:24.456994 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2026-03-11 00:58:24.456999 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2026-03-11 00:58:24.457003 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2026-03-11 00:58:24.457007 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2026-03-11 00:58:24.457012 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2026-03-11 00:58:24.457016 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2026-03-11 00:58:24.457021 | orchestrator |
2026-03-11 00:58:24.457025 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] *************************
2026-03-11 00:58:24.457030 | orchestrator | Wednesday 11 March 2026 00:56:40 +0000 (0:00:00.831) 0:00:20.337 *******
2026-03-11 00:58:24.457034 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2026-03-11 00:58:24.457039 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2026-03-11 00:58:24.457044 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2026-03-11 00:58:24.457049 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.457055 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2026-03-11 00:58:24.457060 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2026-03-11 00:58:24.457064 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2026-03-11 00:58:24.457069 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:24.457074 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2026-03-11 00:58:24.457078 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2026-03-11 00:58:24.457086 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2026-03-11 00:58:24.457091 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:24.457095 | orchestrator |
2026-03-11 00:58:24.457100 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] ***********************
2026-03-11 00:58:24.457105 | orchestrator | Wednesday 11 March 2026 00:56:40 +0000 (0:00:00.450) 0:00:20.787 ******* 2026-03-11
00:58:24.457111 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 00:58:24.457116 | orchestrator |
2026-03-11 00:58:24.457120 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2026-03-11 00:58:24.457126 | orchestrator | Wednesday 11 March 2026 00:56:41 +0000 (0:00:00.716) 0:00:21.504 *******
2026-03-11 00:58:24.457138 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.457144 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:24.457150 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:24.457156 | orchestrator |
2026-03-11 00:58:24.457161 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2026-03-11 00:58:24.457167 | orchestrator | Wednesday 11 March 2026 00:56:41 +0000 (0:00:00.331) 0:00:21.836 *******
2026-03-11 00:58:24.457173 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.457178 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:24.457184 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:24.457189 | orchestrator |
2026-03-11 00:58:24.457195 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2026-03-11 00:58:24.457201 | orchestrator | Wednesday 11 March 2026 00:56:41 +0000 (0:00:00.321) 0:00:22.157 *******
2026-03-11 00:58:24.457206 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.457212 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:24.457218 | orchestrator | skipping: [testbed-node-5]
2026-03-11 00:58:24.457223 | orchestrator |
2026-03-11 00:58:24.457229 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2026-03-11 00:58:24.457235 | orchestrator | Wednesday 11 March 2026 00:56:42 +0000 (0:00:00.283) 0:00:22.440 *******
2026-03-11 00:58:24.457240 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:58:24.457246 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:58:24.457251 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:58:24.457257 | orchestrator |
2026-03-11 00:58:24.457262 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2026-03-11 00:58:24.457268 | orchestrator | Wednesday 11 March 2026 00:56:42 +0000 (0:00:00.848) 0:00:23.289 *******
2026-03-11 00:58:24.457274 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:58:24.457279 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-11 00:58:24.457284 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-11 00:58:24.457290 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.457295 | orchestrator |
2026-03-11 00:58:24.457301 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2026-03-11 00:58:24.457306 | orchestrator | Wednesday 11 March 2026 00:56:43 +0000 (0:00:00.366) 0:00:23.656 *******
2026-03-11 00:58:24.457311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:58:24.457317 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-11 00:58:24.457322 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-11 00:58:24.457336 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.457346 | orchestrator |
2026-03-11 00:58:24.457352 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2026-03-11 00:58:24.457357 | orchestrator | Wednesday 11 March 2026 00:56:43 +0000 (0:00:00.393) 0:00:24.049 *******
2026-03-11 00:58:24.457363 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:58:24.457368 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2026-03-11 00:58:24.457373 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2026-03-11 00:58:24.457379 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.457384 | orchestrator |
2026-03-11 00:58:24.457389 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2026-03-11 00:58:24.457395 | orchestrator | Wednesday 11 March 2026 00:56:44 +0000 (0:00:00.358) 0:00:24.408 *******
2026-03-11 00:58:24.457400 | orchestrator | ok: [testbed-node-3]
2026-03-11 00:58:24.457405 | orchestrator | ok: [testbed-node-4]
2026-03-11 00:58:24.457411 | orchestrator | ok: [testbed-node-5]
2026-03-11 00:58:24.457416 | orchestrator |
2026-03-11 00:58:24.457421 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2026-03-11 00:58:24.457427 | orchestrator | Wednesday 11 March 2026 00:56:44 +0000 (0:00:00.303) 0:00:24.711 *******
2026-03-11 00:58:24.457432 | orchestrator | ok: [testbed-node-3] => (item=0)
2026-03-11 00:58:24.457442 | orchestrator | ok: [testbed-node-4] => (item=0)
2026-03-11 00:58:24.457447 | orchestrator | ok: [testbed-node-5] => (item=0)
2026-03-11 00:58:24.457453 | orchestrator |
2026-03-11 00:58:24.457460 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2026-03-11 00:58:24.457466 | orchestrator | Wednesday 11 March 2026 00:56:44 +0000 (0:00:00.468) 0:00:25.179 *******
2026-03-11 00:58:24.457472 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-11 00:58:24.457478 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-11 00:58:24.457484 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-11 00:58:24.457490 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:58:24.457496 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-11 00:58:24.457502 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-11 00:58:24.457508 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-11 00:58:24.457514 | orchestrator |
2026-03-11 00:58:24.457520 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2026-03-11 00:58:24.457529 | orchestrator | Wednesday 11 March 2026 00:56:45 +0000 (0:00:00.971) 0:00:26.151 *******
2026-03-11 00:58:24.457535 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2026-03-11 00:58:24.457541 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2026-03-11 00:58:24.457548 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2026-03-11 00:58:24.457554 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2026-03-11 00:58:24.457560 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2026-03-11 00:58:24.457567 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2026-03-11 00:58:24.457576 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2026-03-11 00:58:24.457582 | orchestrator |
2026-03-11 00:58:24.457587 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2026-03-11 00:58:24.457594 | orchestrator | Wednesday 11 March 2026 00:56:47 +0000 (0:00:02.143) 0:00:28.295 *******
2026-03-11 00:58:24.457650 | orchestrator | skipping: [testbed-node-3]
2026-03-11 00:58:24.457657 | orchestrator | skipping: [testbed-node-4]
2026-03-11 00:58:24.457662 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2026-03-11 00:58:24.457667 | orchestrator |
2026-03-11 00:58:24.457672 |
orchestrator | TASK [create openstack pool(s)] ************************************************
2026-03-11 00:58:24.457678 | orchestrator | Wednesday 11 March 2026 00:56:48 +0000 (0:00:00.458) 0:00:28.753 *******
2026-03-11 00:58:24.457685 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-11 00:58:24.457692 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-11 00:58:24.457698 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-11 00:58:24.457704 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-11 00:58:24.457715 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2026-03-11 00:58:24.457721 | orchestrator |
2026-03-11 00:58:24.457727 | orchestrator | TASK [generate keys] ***********************************************************
2026-03-11 00:58:24.457733 | orchestrator | Wednesday 11 March 2026 00:57:31 +0000 (0:00:42.835) 0:01:11.589 *******
2026-03-11 00:58:24.457739 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:24.457745 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:24.457752 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:24.457758 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:24.457764 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:24.457771 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:24.457777 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2026-03-11 00:58:24.457784 | orchestrator |
2026-03-11 00:58:24.457790 | orchestrator | TASK [get keys from monitors] **************************************************
2026-03-11 00:58:24.457796 | orchestrator | Wednesday 11 March 2026 00:57:54 +0000 (0:00:23.650) 0:01:35.240 *******
2026-03-11 00:58:24.457802 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:24.457809 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:24.457815 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:24.457821 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:24.457827 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:24.457833 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:24.457842 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2026-03-11 00:58:24.457848 | orchestrator |
2026-03-11 00:58:24.457854 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2026-03-11 00:58:24.457860 | orchestrator | Wednesday 11 March 2026 00:58:06 +0000 (0:00:11.293) 0:01:46.533 *******
2026-03-11 00:58:24.457865 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:24.457871 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-11 00:58:24.457876 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-11 00:58:24.457882 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:24.457887 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-11 00:58:24.457896 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-11 00:58:24.457902 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:24.457907 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-11 00:58:24.457912 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-11 00:58:24.457918 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:24.457923 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-11 00:58:24.457932 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-11 00:58:24.457937 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:24.457942 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-11 00:58:24.457948 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-11 00:58:24.457953 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2026-03-11 00:58:24.457959 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2026-03-11 00:58:24.457964 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2026-03-11 00:58:24.457969 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2026-03-11 00:58:24.457974 | orchestrator |
2026-03-11 00:58:24.457980 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:58:24.457985 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2026-03-11 00:58:24.457991 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2026-03-11 00:58:24.457997 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2026-03-11 00:58:24.458002 | orchestrator |
2026-03-11 00:58:24.458007 | orchestrator |
2026-03-11 00:58:24.458047 | orchestrator |
2026-03-11 00:58:24.458055 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:58:24.458061 | orchestrator | Wednesday 11 March 2026 00:58:22 +0000 (0:00:16.318) 0:02:02.851 *******
2026-03-11 00:58:24.458066 | orchestrator | ===============================================================================
2026-03-11 00:58:24.458072 | orchestrator | create openstack pool(s) ----------------------------------------------- 42.84s
2026-03-11 00:58:24.458077 | orchestrator | generate keys ---------------------------------------------------------- 23.65s
2026-03-11 00:58:24.458083 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 16.32s
2026-03-11 00:58:24.458088 | orchestrator | get keys from monitors ------------------------------------------------- 11.29s
2026-03-11 00:58:24.458094 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 2.14s
2026-03-11 00:58:24.458099 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 1.96s
2026-03-11 00:58:24.458105 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.76s
2026-03-11 00:58:24.458110 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.97s
2026-03-11 00:58:24.458116 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.85s
2026-03-11 00:58:24.458121 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.83s
2026-03-11 00:58:24.458126 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.72s
2026-03-11 00:58:24.458132 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.72s
2026-03-11 00:58:24.458137 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.69s
2026-03-11 00:58:24.458143 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.66s
2026-03-11 00:58:24.458148 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 0.64s
2026-03-11 00:58:24.458154 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.62s
2026-03-11 00:58:24.458159 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.59s
2026-03-11 00:58:24.458164 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.59s
2026-03-11 00:58:24.458170 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.58s
2026-03-11
00:58:24.458182 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.56s
2026-03-11 00:58:24.458188 | orchestrator | 2026-03-11 00:58:24 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED
2026-03-11 00:58:24.458192 | orchestrator | 2026-03-11 00:58:24 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED
2026-03-11 00:58:24.458390 | orchestrator | 2026-03-11 00:58:24 | INFO  | Task 4e9a5f77-2ce2-4c67-abd2-6d1d1551b0ae is in state STARTED
2026-03-11 00:58:24.458425 | orchestrator | 2026-03-11 00:58:24 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:58:27.513797 | orchestrator | 2026-03-11 00:58:27 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED
2026-03-11 00:58:27.516295 | orchestrator | 2026-03-11 00:58:27 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED
2026-03-11 00:58:27.518493 | orchestrator | 2026-03-11 00:58:27 | INFO  | Task 4e9a5f77-2ce2-4c67-abd2-6d1d1551b0ae is in state STARTED
2026-03-11 00:58:27.518543 | orchestrator | 2026-03-11 00:58:27 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:58:30.561047 | orchestrator | 2026-03-11 00:58:30 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED
2026-03-11 00:58:30.562938 | orchestrator | 2026-03-11 00:58:30 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED
2026-03-11 00:58:30.565804 | orchestrator | 2026-03-11 00:58:30 | INFO  | Task 4e9a5f77-2ce2-4c67-abd2-6d1d1551b0ae is in state STARTED
2026-03-11 00:58:30.566619 | orchestrator | 2026-03-11 00:58:30 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:58:33.603831 | orchestrator | 2026-03-11 00:58:33 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED
2026-03-11 00:58:33.606064 | orchestrator | 2026-03-11 00:58:33 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED
2026-03-11 00:58:33.607526 | orchestrator | 2026-03-11 00:58:33 | INFO  | Task 4e9a5f77-2ce2-4c67-abd2-6d1d1551b0ae is in state STARTED
2026-03-11 00:58:33.607559 | orchestrator | 2026-03-11 00:58:33 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:58:36.648612 | orchestrator | 2026-03-11 00:58:36 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED
2026-03-11 00:58:36.652213 | orchestrator | 2026-03-11 00:58:36 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED
2026-03-11 00:58:36.655546 | orchestrator | 2026-03-11 00:58:36 | INFO  | Task 4e9a5f77-2ce2-4c67-abd2-6d1d1551b0ae is in state STARTED
2026-03-11 00:58:36.655981 | orchestrator | 2026-03-11 00:58:36 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:58:39.695695 | orchestrator | 2026-03-11 00:58:39 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED
2026-03-11 00:58:39.697911 | orchestrator | 2026-03-11 00:58:39 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED
2026-03-11 00:58:39.699206 | orchestrator | 2026-03-11 00:58:39 | INFO  | Task 4e9a5f77-2ce2-4c67-abd2-6d1d1551b0ae is in state STARTED
2026-03-11 00:58:39.699442 | orchestrator | 2026-03-11 00:58:39 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:58:42.744775 | orchestrator | 2026-03-11 00:58:42 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED
2026-03-11 00:58:42.746082 | orchestrator | 2026-03-11 00:58:42 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED
2026-03-11 00:58:42.747821 | orchestrator | 2026-03-11 00:58:42 | INFO  | Task 4e9a5f77-2ce2-4c67-abd2-6d1d1551b0ae is in state STARTED
2026-03-11 00:58:42.747876 | orchestrator | 2026-03-11 00:58:42 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:58:45.795858 | orchestrator | 2026-03-11 00:58:45 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state STARTED
2026-03-11 00:58:45.798067 | orchestrator | 2026-03-11 00:58:45 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED
2026-03-11 00:58:45.800068 | orchestrator | 2026-03-11 00:58:45 | INFO  | Task 4e9a5f77-2ce2-4c67-abd2-6d1d1551b0ae is in state STARTED
2026-03-11 00:58:45.800110 | orchestrator | 2026-03-11 00:58:45 | INFO  | Wait 1 second(s) until the next check
2026-03-11 00:58:48.851062 | orchestrator | 2026-03-11 00:58:48 | INFO  | Task dfddc77b-07cf-498d-8a99-cf66f23035ad is in state SUCCESS
2026-03-11 00:58:48.852111 | orchestrator |
2026-03-11 00:58:48.852204 | orchestrator |
2026-03-11 00:58:48.852216 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-11 00:58:48.852224 | orchestrator |
2026-03-11 00:58:48.852230 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-11 00:58:48.852237 | orchestrator | Wednesday 11 March 2026 00:57:23 +0000 (0:00:00.273) 0:00:00.273 *******
2026-03-11 00:58:48.852252 | orchestrator | ok: [testbed-node-0]
2026-03-11 00:58:48.852261 | orchestrator | ok: [testbed-node-1]
2026-03-11 00:58:48.852267 | orchestrator | ok: [testbed-node-2]
2026-03-11 00:58:48.852274 | orchestrator |
2026-03-11 00:58:48.852281 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 00:58:48.852294 | orchestrator | Wednesday 11 March 2026 00:57:24 +0000 (0:00:00.295) 0:00:00.568 *******
2026-03-11 00:58:48.852300 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2026-03-11 00:58:48.852308 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2026-03-11 00:58:48.852355 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2026-03-11 00:58:48.852363 | orchestrator |
2026-03-11 00:58:48.852370 | orchestrator | PLAY [Apply role horizon] ******************************************************
2026-03-11 00:58:48.852376 | orchestrator |
2026-03-11 00:58:48.852382 | orchestrator | TASK [horizon : include_tasks]
************************************************* 2026-03-11 00:58:48.852494 | orchestrator | Wednesday 11 March 2026 00:57:24 +0000 (0:00:00.415) 0:00:00.984 ******* 2026-03-11 00:58:48.852499 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 00:58:48.852503 | orchestrator | 2026-03-11 00:58:48.852508 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2026-03-11 00:58:48.852511 | orchestrator | Wednesday 11 March 2026 00:57:24 +0000 (0:00:00.489) 0:00:01.474 ******* 2026-03-11 00:58:48.852519 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-11 00:58:48.852551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-11 00:58:48.852556 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2026-03-11 00:58:48.852564 | orchestrator | 2026-03-11 00:58:48.852568 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2026-03-11 00:58:48.852571 | orchestrator | Wednesday 11 March 2026 00:57:26 +0000 (0:00:01.145) 0:00:02.619 ******* 2026-03-11 00:58:48.852610 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:58:48.852621 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:58:48.852630 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:58:48.852635 | orchestrator | 2026-03-11 00:58:48.852641 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2026-03-11 00:58:48.852646 | orchestrator | Wednesday 11 March 2026 00:57:26 +0000 (0:00:00.444) 0:00:03.063 ******* 2026-03-11 
00:58:48.852652 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-11 00:58:48.852664 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-11 00:58:48.852670 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2026-03-11 00:58:48.852684 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2026-03-11 00:58:48.852694 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2026-03-11 00:58:48.852700 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2026-03-11 00:58:48.852705 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2026-03-11 00:58:48.852711 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2026-03-11 00:58:48.852718 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2026-03-11 00:58:48.852745 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-11 00:58:48.852752 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2026-03-11 00:58:48.852758 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2026-03-11 00:58:48.852764 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2026-03-11 00:58:48.852770 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2026-03-11 00:58:48.852776 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2026-03-11 00:58:48.852782 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2026-03-11 00:58:48.852789 | orchestrator | skipping: [testbed-node-2] => (item={'name': 
'cloudkitty', 'enabled': False})  2026-03-11 00:58:48.852795 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2026-03-11 00:58:48.852802 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2026-03-11 00:58:48.852808 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2026-03-11 00:58:48.852814 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2026-03-11 00:58:48.852826 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2026-03-11 00:58:48.852830 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2026-03-11 00:58:48.852834 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2026-03-11 00:58:48.852838 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2026-03-11 00:58:48.852843 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2026-03-11 00:58:48.852847 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2026-03-11 00:58:48.852851 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2026-03-11 00:58:48.852855 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2026-03-11 00:58:48.852859 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2026-03-11 00:58:48.852862 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2026-03-11 00:58:48.852866 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2026-03-11 00:58:48.852870 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2026-03-11 00:58:48.852875 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2026-03-11 00:58:48.852878 | orchestrator | 2026-03-11 00:58:48.852882 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-11 00:58:48.852886 | orchestrator | Wednesday 11 March 2026 00:57:27 +0000 (0:00:00.754) 0:00:03.818 ******* 2026-03-11 00:58:48.852890 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:58:48.852895 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:58:48.852901 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:58:48.852907 | orchestrator | 2026-03-11 00:58:48.852914 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-11 00:58:48.852920 | orchestrator | Wednesday 11 March 2026 00:57:27 +0000 (0:00:00.317) 0:00:04.135 ******* 2026-03-11 00:58:48.852927 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.852932 | orchestrator | 2026-03-11 00:58:48.852940 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-11 00:58:48.852944 | orchestrator | Wednesday 11 March 2026 00:57:27 +0000 (0:00:00.124) 0:00:04.259 ******* 2026-03-11 00:58:48.852948 | 
orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.852951 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:58:48.852955 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:58:48.852959 | orchestrator | 2026-03-11 00:58:48.852965 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-11 00:58:48.852969 | orchestrator | Wednesday 11 March 2026 00:57:28 +0000 (0:00:00.392) 0:00:04.651 ******* 2026-03-11 00:58:48.852973 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:58:48.852977 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:58:48.852980 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:58:48.852984 | orchestrator | 2026-03-11 00:58:48.852988 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-11 00:58:48.852992 | orchestrator | Wednesday 11 March 2026 00:57:28 +0000 (0:00:00.265) 0:00:04.916 ******* 2026-03-11 00:58:48.852999 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853002 | orchestrator | 2026-03-11 00:58:48.853006 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-11 00:58:48.853010 | orchestrator | Wednesday 11 March 2026 00:57:28 +0000 (0:00:00.112) 0:00:05.029 ******* 2026-03-11 00:58:48.853014 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853017 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:58:48.853021 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:58:48.853025 | orchestrator | 2026-03-11 00:58:48.853028 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-11 00:58:48.853032 | orchestrator | Wednesday 11 March 2026 00:57:28 +0000 (0:00:00.238) 0:00:05.268 ******* 2026-03-11 00:58:48.853036 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:58:48.853040 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:58:48.853044 | orchestrator | ok: 
[testbed-node-2] 2026-03-11 00:58:48.853047 | orchestrator | 2026-03-11 00:58:48.853051 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-11 00:58:48.853055 | orchestrator | Wednesday 11 March 2026 00:57:29 +0000 (0:00:00.264) 0:00:05.532 ******* 2026-03-11 00:58:48.853058 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853062 | orchestrator | 2026-03-11 00:58:48.853066 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-11 00:58:48.853070 | orchestrator | Wednesday 11 March 2026 00:57:29 +0000 (0:00:00.223) 0:00:05.756 ******* 2026-03-11 00:58:48.853073 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853077 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:58:48.853081 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:58:48.853084 | orchestrator | 2026-03-11 00:58:48.853088 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-11 00:58:48.853092 | orchestrator | Wednesday 11 March 2026 00:57:29 +0000 (0:00:00.233) 0:00:05.989 ******* 2026-03-11 00:58:48.853096 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:58:48.853099 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:58:48.853103 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:58:48.853107 | orchestrator | 2026-03-11 00:58:48.853111 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-11 00:58:48.853114 | orchestrator | Wednesday 11 March 2026 00:57:29 +0000 (0:00:00.255) 0:00:06.245 ******* 2026-03-11 00:58:48.853118 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853122 | orchestrator | 2026-03-11 00:58:48.853125 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-11 00:58:48.853129 | orchestrator | Wednesday 11 March 2026 00:57:29 +0000 (0:00:00.109) 0:00:06.354 ******* 
2026-03-11 00:58:48.853133 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853137 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:58:48.853140 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:58:48.853144 | orchestrator | 2026-03-11 00:58:48.853148 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-11 00:58:48.853151 | orchestrator | Wednesday 11 March 2026 00:57:30 +0000 (0:00:00.243) 0:00:06.597 ******* 2026-03-11 00:58:48.853155 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:58:48.853159 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:58:48.853163 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:58:48.853166 | orchestrator | 2026-03-11 00:58:48.853170 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-11 00:58:48.853174 | orchestrator | Wednesday 11 March 2026 00:57:30 +0000 (0:00:00.393) 0:00:06.990 ******* 2026-03-11 00:58:48.853177 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853181 | orchestrator | 2026-03-11 00:58:48.853185 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-11 00:58:48.853189 | orchestrator | Wednesday 11 March 2026 00:57:30 +0000 (0:00:00.112) 0:00:07.103 ******* 2026-03-11 00:58:48.853193 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853196 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:58:48.853203 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:58:48.853207 | orchestrator | 2026-03-11 00:58:48.853210 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-11 00:58:48.853214 | orchestrator | Wednesday 11 March 2026 00:57:30 +0000 (0:00:00.281) 0:00:07.385 ******* 2026-03-11 00:58:48.853218 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:58:48.853221 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:58:48.853226 | 
orchestrator | ok: [testbed-node-2] 2026-03-11 00:58:48.853229 | orchestrator | 2026-03-11 00:58:48.853233 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-11 00:58:48.853239 | orchestrator | Wednesday 11 March 2026 00:57:31 +0000 (0:00:00.264) 0:00:07.649 ******* 2026-03-11 00:58:48.853247 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853256 | orchestrator | 2026-03-11 00:58:48.853262 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-11 00:58:48.853268 | orchestrator | Wednesday 11 March 2026 00:57:31 +0000 (0:00:00.107) 0:00:07.756 ******* 2026-03-11 00:58:48.853274 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853280 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:58:48.853287 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:58:48.853293 | orchestrator | 2026-03-11 00:58:48.853300 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-11 00:58:48.853309 | orchestrator | Wednesday 11 March 2026 00:57:31 +0000 (0:00:00.256) 0:00:08.012 ******* 2026-03-11 00:58:48.853313 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:58:48.853316 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:58:48.853320 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:58:48.853324 | orchestrator | 2026-03-11 00:58:48.853328 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-11 00:58:48.853335 | orchestrator | Wednesday 11 March 2026 00:57:31 +0000 (0:00:00.406) 0:00:08.419 ******* 2026-03-11 00:58:48.853339 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853343 | orchestrator | 2026-03-11 00:58:48.853347 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-11 00:58:48.853351 | orchestrator | Wednesday 11 March 2026 00:57:32 +0000 (0:00:00.110) 
0:00:08.529 ******* 2026-03-11 00:58:48.853354 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853358 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:58:48.853362 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:58:48.853365 | orchestrator | 2026-03-11 00:58:48.853369 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-11 00:58:48.853373 | orchestrator | Wednesday 11 March 2026 00:57:32 +0000 (0:00:00.266) 0:00:08.796 ******* 2026-03-11 00:58:48.853376 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:58:48.853382 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:58:48.853388 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:58:48.853397 | orchestrator | 2026-03-11 00:58:48.853404 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-11 00:58:48.853410 | orchestrator | Wednesday 11 March 2026 00:57:32 +0000 (0:00:00.309) 0:00:09.105 ******* 2026-03-11 00:58:48.853416 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853422 | orchestrator | 2026-03-11 00:58:48.853428 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-11 00:58:48.853434 | orchestrator | Wednesday 11 March 2026 00:57:32 +0000 (0:00:00.110) 0:00:09.216 ******* 2026-03-11 00:58:48.853440 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853446 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:58:48.853453 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:58:48.853459 | orchestrator | 2026-03-11 00:58:48.853465 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-11 00:58:48.853471 | orchestrator | Wednesday 11 March 2026 00:57:33 +0000 (0:00:00.380) 0:00:09.597 ******* 2026-03-11 00:58:48.853478 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:58:48.853484 | orchestrator | ok: [testbed-node-1] 2026-03-11 
00:58:48.853497 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:58:48.853503 | orchestrator | 2026-03-11 00:58:48.853508 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-11 00:58:48.853514 | orchestrator | Wednesday 11 March 2026 00:57:33 +0000 (0:00:00.299) 0:00:09.897 ******* 2026-03-11 00:58:48.853520 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853526 | orchestrator | 2026-03-11 00:58:48.853532 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-11 00:58:48.853539 | orchestrator | Wednesday 11 March 2026 00:57:33 +0000 (0:00:00.108) 0:00:10.005 ******* 2026-03-11 00:58:48.853545 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853552 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:58:48.853559 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:58:48.853566 | orchestrator | 2026-03-11 00:58:48.853572 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2026-03-11 00:58:48.853629 | orchestrator | Wednesday 11 March 2026 00:57:33 +0000 (0:00:00.259) 0:00:10.264 ******* 2026-03-11 00:58:48.853637 | orchestrator | ok: [testbed-node-0] 2026-03-11 00:58:48.853643 | orchestrator | ok: [testbed-node-1] 2026-03-11 00:58:48.853650 | orchestrator | ok: [testbed-node-2] 2026-03-11 00:58:48.853656 | orchestrator | 2026-03-11 00:58:48.853662 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2026-03-11 00:58:48.853668 | orchestrator | Wednesday 11 March 2026 00:57:34 +0000 (0:00:00.287) 0:00:10.552 ******* 2026-03-11 00:58:48.853675 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853681 | orchestrator | 2026-03-11 00:58:48.853686 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2026-03-11 00:58:48.853690 | orchestrator | Wednesday 11 March 2026 00:57:34 +0000 
(0:00:00.122) 0:00:10.675 ******* 2026-03-11 00:58:48.853694 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853698 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:58:48.853701 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:58:48.853705 | orchestrator | 2026-03-11 00:58:48.853709 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2026-03-11 00:58:48.853713 | orchestrator | Wednesday 11 March 2026 00:57:34 +0000 (0:00:00.365) 0:00:11.040 ******* 2026-03-11 00:58:48.853716 | orchestrator | changed: [testbed-node-1] 2026-03-11 00:58:48.853720 | orchestrator | changed: [testbed-node-0] 2026-03-11 00:58:48.853724 | orchestrator | changed: [testbed-node-2] 2026-03-11 00:58:48.853728 | orchestrator | 2026-03-11 00:58:48.853731 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2026-03-11 00:58:48.853735 | orchestrator | Wednesday 11 March 2026 00:57:36 +0000 (0:00:01.637) 0:00:12.678 ******* 2026-03-11 00:58:48.853739 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-11 00:58:48.853743 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-11 00:58:48.853747 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2026-03-11 00:58:48.853750 | orchestrator | 2026-03-11 00:58:48.853754 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2026-03-11 00:58:48.853758 | orchestrator | Wednesday 11 March 2026 00:57:37 +0000 (0:00:01.337) 0:00:14.016 ******* 2026-03-11 00:58:48.853762 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-11 00:58:48.853766 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-11 
00:58:48.853770 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2026-03-11 00:58:48.853774 | orchestrator | 2026-03-11 00:58:48.853777 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2026-03-11 00:58:48.853787 | orchestrator | Wednesday 11 March 2026 00:57:39 +0000 (0:00:02.402) 0:00:16.418 ******* 2026-03-11 00:58:48.853791 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-11 00:58:48.853802 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-11 00:58:48.853806 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2026-03-11 00:58:48.853810 | orchestrator | 2026-03-11 00:58:48.853814 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2026-03-11 00:58:48.853817 | orchestrator | Wednesday 11 March 2026 00:57:42 +0000 (0:00:02.394) 0:00:18.813 ******* 2026-03-11 00:58:48.853821 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853825 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:58:48.853829 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:58:48.853832 | orchestrator | 2026-03-11 00:58:48.853836 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2026-03-11 00:58:48.853840 | orchestrator | Wednesday 11 March 2026 00:57:42 +0000 (0:00:00.292) 0:00:19.106 ******* 2026-03-11 00:58:48.853844 | orchestrator | skipping: [testbed-node-0] 2026-03-11 00:58:48.853847 | orchestrator | skipping: [testbed-node-1] 2026-03-11 00:58:48.853851 | orchestrator | skipping: [testbed-node-2] 2026-03-11 00:58:48.853855 | orchestrator | 2026-03-11 00:58:48.853859 | orchestrator | TASK [horizon : include_tasks] 
*************************************************
2026-03-11 00:58:48.853862 | orchestrator | Wednesday 11 March 2026 00:57:42 +0000 (0:00:00.268) 0:00:19.375 *******
2026-03-11 00:58:48.853866 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:58:48.853870 | orchestrator |
2026-03-11 00:58:48.853874 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ********
2026-03-11 00:58:48.853877 | orchestrator | Wednesday 11 March 2026 00:57:43 +0000 (0:00:00.830) 0:00:20.205 *******
2026-03-11 00:58:48.853882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2026-03-11 00:58:48.853898 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:58:48.853905 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:58:48.853912 | orchestrator |
2026-03-11 00:58:48.853916 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] ***
2026-03-11 00:58:48.853920 | orchestrator | Wednesday 11 March 2026 00:57:45 +0000 (0:00:01.434) 0:00:21.640 *******
2026-03-11 00:58:48.853934 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:58:48.853947 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:58:48.853958 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:58:48.853962 | orchestrator |
2026-03-11 00:58:48.853965 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] *****
2026-03-11 00:58:48.853969 | orchestrator | Wednesday 11 March 2026 00:57:45 +0000 (0:00:00.654) 0:00:22.294 *******
2026-03-11 00:58:48.853985 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:58:48.853993 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:58:48.854009 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:58:48.854041 | orchestrator |
2026-03-11 00:58:48.854047 | orchestrator | TASK [horizon : Deploy horizon container] **************************************
2026-03-11 00:58:48.854051 | orchestrator | Wednesday 11 March 2026 00:57:46 +0000 (0:00:00.930) 0:00:23.225 *******
2026-03-11 00:58:48.854055 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:58:48.854068 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:58:48.854073 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:58:48.854080 | orchestrator |
2026-03-11 00:58:48.854084 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-11 00:58:48.854088 | orchestrator | Wednesday 11 March 2026 00:57:48 +0000 (0:00:01.388) 0:00:24.613 *******
2026-03-11 00:58:48.854092 | orchestrator | skipping: [testbed-node-0]
2026-03-11 00:58:48.854096 | orchestrator | skipping: [testbed-node-1]
2026-03-11 00:58:48.854099 | orchestrator | skipping: [testbed-node-2]
2026-03-11 00:58:48.854103 | orchestrator |
2026-03-11 00:58:48.854107 | orchestrator | TASK [horizon : include_tasks] *************************************************
2026-03-11 00:58:48.854110 | orchestrator | Wednesday 11 March 2026 00:57:48 +0000 (0:00:00.287) 0:00:24.900 *******
2026-03-11 00:58:48.854114 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 00:58:48.854118 | orchestrator |
2026-03-11 00:58:48.854121 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2026-03-11 00:58:48.854127 | orchestrator | Wednesday 11 March 2026 00:57:48 +0000 (0:00:00.492) 0:00:25.392 *******
2026-03-11 00:58:48.854131 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:58:48.854141 | orchestrator |
2026-03-11 00:58:48.854149 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2026-03-11 00:58:48.854153 | orchestrator | Wednesday 11 March 2026 00:57:51 +0000 (0:00:02.350) 0:00:27.743 *******
2026-03-11 00:58:48.854159 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:58:48.854163 |
orchestrator |
2026-03-11 00:58:48.854167 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2026-03-11 00:58:48.854170 | orchestrator | Wednesday 11 March 2026 00:57:54 +0000 (0:00:03.032) 0:00:30.775 *******
2026-03-11 00:58:48.854174 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:58:48.854178 | orchestrator |
2026-03-11 00:58:48.854182 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-11 00:58:48.854186 | orchestrator | Wednesday 11 March 2026 00:58:08 +0000 (0:00:14.542) 0:00:45.317 *******
2026-03-11 00:58:48.854189 | orchestrator |
2026-03-11 00:58:48.854193 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-11 00:58:48.854197 | orchestrator | Wednesday 11 March 2026 00:58:08 +0000 (0:00:00.069) 0:00:45.387 *******
2026-03-11 00:58:48.854201 | orchestrator |
2026-03-11 00:58:48.854204 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2026-03-11 00:58:48.854208 | orchestrator | Wednesday 11 March 2026 00:58:08 +0000 (0:00:00.075) 0:00:45.462 *******
2026-03-11 00:58:48.854212 | orchestrator |
2026-03-11 00:58:48.854216 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2026-03-11 00:58:48.854219 | orchestrator | Wednesday 11 March 2026 00:58:09 +0000 (0:00:00.067) 0:00:45.530 *******
2026-03-11 00:58:48.854223 | orchestrator | changed: [testbed-node-0]
2026-03-11 00:58:48.854227 | orchestrator | changed: [testbed-node-2]
2026-03-11 00:58:48.854233 | orchestrator | changed: [testbed-node-1]
2026-03-11 00:58:48.854239 | orchestrator |
2026-03-11 00:58:48.854249 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 00:58:48.854255 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2026-03-11 00:58:48.854262 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-03-11 00:58:48.854268 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2026-03-11 00:58:48.854278 | orchestrator |
2026-03-11 00:58:48.854284 | orchestrator |
2026-03-11 00:58:48.854291 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 00:58:48.854298 | orchestrator | Wednesday 11 March 2026 00:58:46 +0000 (0:00:37.238) 0:01:22.769 *******
2026-03-11 00:58:48.854304 | orchestrator | ===============================================================================
2026-03-11 00:58:48.854310 | orchestrator | horizon : Restart horizon container ------------------------------------ 37.24s
2026-03-11 00:58:48.854319 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 14.54s
2026-03-11 00:58:48.854327 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 3.03s
2026-03-11 00:58:48.854333 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.40s
2026-03-11 00:58:48.854339 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.39s
2026-03-11 00:58:48.854345 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.35s
2026-03-11 00:58:48.854350 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.64s
2026-03-11 00:58:48.854356 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.43s
2026-03-11 00:58:48.854362 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.39s
2026-03-11 00:58:48.854369 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.34s
2026-03-11 00:58:48.854376 | orchestrator |
horizon : Ensuring config directories exist ----------------------------- 1.15s 2026-03-11 00:58:48.854382 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.93s 2026-03-11 00:58:48.854389 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.83s 2026-03-11 00:58:48.854394 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s 2026-03-11 00:58:48.854398 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.65s 2026-03-11 00:58:48.854402 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.49s 2026-03-11 00:58:48.854405 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.49s 2026-03-11 00:58:48.854409 | orchestrator | horizon : Set empty custom policy --------------------------------------- 0.44s 2026-03-11 00:58:48.854413 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.42s 2026-03-11 00:58:48.854417 | orchestrator | horizon : Update policy file name --------------------------------------- 0.41s 2026-03-11 00:58:48.854420 | orchestrator | 2026-03-11 00:58:48 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:58:48.855122 | orchestrator | 2026-03-11 00:58:48 | INFO  | Task 4e9a5f77-2ce2-4c67-abd2-6d1d1551b0ae is in state STARTED 2026-03-11 00:58:48.855148 | orchestrator | 2026-03-11 00:58:48 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:58:51.893669 | orchestrator | 2026-03-11 00:58:51 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:58:51.896177 | orchestrator | 2026-03-11 00:58:51 | INFO  | Task 4e9a5f77-2ce2-4c67-abd2-6d1d1551b0ae is in state STARTED 2026-03-11 00:58:51.896775 | orchestrator | 2026-03-11 00:58:51 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:58:54.943021 | 
orchestrator | 2026-03-11 00:58:54 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:58:54.944293 | orchestrator | 2026-03-11 00:58:54 | INFO  | Task 4e9a5f77-2ce2-4c67-abd2-6d1d1551b0ae is in state STARTED 2026-03-11 00:58:54.944603 | orchestrator | 2026-03-11 00:58:54 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:58:57.991169 | orchestrator | 2026-03-11 00:58:57 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:58:57.991409 | orchestrator | 2026-03-11 00:58:57 | INFO  | Task 4e9a5f77-2ce2-4c67-abd2-6d1d1551b0ae is in state STARTED 2026-03-11 00:58:57.992014 | orchestrator | 2026-03-11 00:58:57 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:59:01.031944 | orchestrator | 2026-03-11 00:59:01 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:59:01.039890 | orchestrator | 2026-03-11 00:59:01 | INFO  | Task 4e9a5f77-2ce2-4c67-abd2-6d1d1551b0ae is in state STARTED 2026-03-11 00:59:01.039983 | orchestrator | 2026-03-11 00:59:01 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:59:04.100126 | orchestrator | 2026-03-11 00:59:04 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:59:04.101532 | orchestrator | 2026-03-11 00:59:04 | INFO  | Task 4e9a5f77-2ce2-4c67-abd2-6d1d1551b0ae is in state SUCCESS 2026-03-11 00:59:04.103258 | orchestrator | 2026-03-11 00:59:04 | INFO  | Task 47c882bb-417f-40fb-b1a0-d5e4f5971015 is in state STARTED 2026-03-11 00:59:04.105227 | orchestrator | 2026-03-11 00:59:04 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:59:07.143872 | orchestrator | 2026-03-11 00:59:07 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:59:07.144945 | orchestrator | 2026-03-11 00:59:07 | INFO  | Task 47c882bb-417f-40fb-b1a0-d5e4f5971015 is in state STARTED 2026-03-11 00:59:07.144993 | orchestrator | 2026-03-11 00:59:07 | INFO  | Wait 1 
second(s) until the next check [... identical polling records repeat every ~3 s from 00:59:10 to 00:59:52: Tasks 8c1360c6-cc91-4265-bd0d-765e2e84b00a and 47c882bb-417f-40fb-b1a0-d5e4f5971015 remain in state STARTED ...] 2026-03-11 00:59:55.865402 | orchestrator | 2026-03-11 00:59:55 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:59:55.867247 | orchestrator | 2026-03-11 00:59:55 |
INFO  | Task 47c882bb-417f-40fb-b1a0-d5e4f5971015 is in state STARTED 2026-03-11 00:59:55.867314 | orchestrator | 2026-03-11 00:59:55 | INFO  | Wait 1 second(s) until the next check 2026-03-11 00:59:58.902419 | orchestrator | 2026-03-11 00:59:58 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 00:59:58.903617 | orchestrator | 2026-03-11 00:59:58 | INFO  | Task 47c882bb-417f-40fb-b1a0-d5e4f5971015 is in state STARTED 2026-03-11 00:59:58.903688 | orchestrator | 2026-03-11 00:59:58 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:01.935805 | orchestrator | 2026-03-11 01:00:01 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:01.936841 | orchestrator | 2026-03-11 01:00:01 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:01.936886 | orchestrator | 2026-03-11 01:00:01 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state STARTED 2026-03-11 01:00:01.937626 | orchestrator | 2026-03-11 01:00:01 | INFO  | Task 87a80170-617a-48a4-9d1f-e57e96781bae is in state STARTED 2026-03-11 01:00:01.938789 | orchestrator | 2026-03-11 01:00:01 | INFO  | Task 47c882bb-417f-40fb-b1a0-d5e4f5971015 is in state SUCCESS 2026-03-11 01:00:01.939046 | orchestrator | 2026-03-11 01:00:01.939066 | orchestrator | 2026-03-11 01:00:01.939073 | orchestrator | PLAY [Copy ceph keys to the configuration repository] ************************** 2026-03-11 01:00:01.939080 | orchestrator | 2026-03-11 01:00:01.939096 | orchestrator | TASK [Check if ceph keys exist] ************************************************ 2026-03-11 01:00:01.939103 | orchestrator | Wednesday 11 March 2026 00:58:27 +0000 (0:00:00.171) 0:00:00.171 ******* 2026-03-11 01:00:01.939109 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-11 01:00:01.939117 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => 
(item=ceph.client.cinder.keyring) 2026-03-11 01:00:01.939123 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:01.939130 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-11 01:00:01.939136 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:01.939143 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-11 01:00:01.939149 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-11 01:00:01.939156 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-11 01:00:01.939162 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-11 01:00:01.939168 | orchestrator | 2026-03-11 01:00:01.939175 | orchestrator | TASK [Fetch all ceph keys] ***************************************************** 2026-03-11 01:00:01.939182 | orchestrator | Wednesday 11 March 2026 00:58:31 +0000 (0:00:04.226) 0:00:04.397 ******* 2026-03-11 01:00:01.939188 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring) 2026-03-11 01:00:01.939195 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:01.939202 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:01.939208 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring) 2026-03-11 01:00:01.939215 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:01.939221 | 
orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring) 2026-03-11 01:00:01.939241 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring) 2026-03-11 01:00:01.939248 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring) 2026-03-11 01:00:01.939255 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring) 2026-03-11 01:00:01.939262 | orchestrator | 2026-03-11 01:00:01.939268 | orchestrator | TASK [Create share directory] ************************************************** 2026-03-11 01:00:01.939275 | orchestrator | Wednesday 11 March 2026 00:58:35 +0000 (0:00:04.061) 0:00:08.458 ******* 2026-03-11 01:00:01.939282 | orchestrator | changed: [testbed-manager -> localhost] 2026-03-11 01:00:01.939289 | orchestrator | 2026-03-11 01:00:01.939295 | orchestrator | TASK [Write ceph keys to the share directory] ********************************** 2026-03-11 01:00:01.939314 | orchestrator | Wednesday 11 March 2026 00:58:36 +0000 (0:00:01.052) 0:00:09.511 ******* 2026-03-11 01:00:01.939327 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring) 2026-03-11 01:00:01.939333 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:01.939340 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:01.939346 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring) 2026-03-11 01:00:01.939352 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:01.939359 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring) 2026-03-11 01:00:01.939366 | orchestrator | changed: [testbed-manager -> localhost] => 
(item=ceph.client.glance.keyring) 2026-03-11 01:00:01.939373 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring) 2026-03-11 01:00:01.939379 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring) 2026-03-11 01:00:01.939386 | orchestrator | 2026-03-11 01:00:01.939392 | orchestrator | TASK [Check if target directories exist] *************************************** 2026-03-11 01:00:01.939399 | orchestrator | Wednesday 11 March 2026 00:58:51 +0000 (0:00:14.095) 0:00:23.606 ******* 2026-03-11 01:00:01.939406 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/infrastructure/files/ceph) 2026-03-11 01:00:01.939413 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume) 2026-03-11 01:00:01.939419 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-11 01:00:01.939426 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup) 2026-03-11 01:00:01.939441 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-11 01:00:01.939453 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/nova) 2026-03-11 01:00:01.939459 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/glance) 2026-03-11 01:00:01.939465 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/gnocchi) 2026-03-11 01:00:01.939472 | orchestrator | ok: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/manila) 2026-03-11 01:00:01.939479 | orchestrator | 2026-03-11 01:00:01.939486 | orchestrator | TASK [Write ceph keys to the configuration directory] ************************** 
2026-03-11 01:00:01.939492 | orchestrator | Wednesday 11 March 2026 00:58:54 +0000 (0:00:03.073) 0:00:26.680 ******* 2026-03-11 01:00:01.939535 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring) 2026-03-11 01:00:01.939543 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:01.939550 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:01.939565 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring) 2026-03-11 01:00:01.939572 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring) 2026-03-11 01:00:01.939578 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring) 2026-03-11 01:00:01.939585 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring) 2026-03-11 01:00:01.939592 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring) 2026-03-11 01:00:01.939598 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring) 2026-03-11 01:00:01.939605 | orchestrator | 2026-03-11 01:00:01.939612 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:00:01.939619 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:00:01.939626 | orchestrator | 2026-03-11 01:00:01.939632 | orchestrator | 2026-03-11 01:00:01.939639 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:00:01.939645 | orchestrator | Wednesday 11 March 2026 00:59:01 +0000 (0:00:06.897) 0:00:33.577 ******* 2026-03-11 01:00:01.939652 | orchestrator | =============================================================================== 2026-03-11 01:00:01.939659 | orchestrator | Write ceph keys to the share directory --------------------------------- 14.10s 2026-03-11 01:00:01.939665 
| orchestrator | Write ceph keys to the configuration directory -------------------------- 6.90s 2026-03-11 01:00:01.939672 | orchestrator | Check if ceph keys exist ------------------------------------------------ 4.23s 2026-03-11 01:00:01.939680 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.06s 2026-03-11 01:00:01.939687 | orchestrator | Check if target directories exist --------------------------------------- 3.07s 2026-03-11 01:00:01.939694 | orchestrator | Create share directory -------------------------------------------------- 1.05s 2026-03-11 01:00:01.939701 | orchestrator | 2026-03-11 01:00:01.939709 | orchestrator | 2026-03-11 01:00:01.939715 | orchestrator | PLAY [Apply role cephclient] *************************************************** 2026-03-11 01:00:01.939722 | orchestrator | 2026-03-11 01:00:01.939729 | orchestrator | TASK [osism.services.cephclient : Include container tasks] ********************* 2026-03-11 01:00:01.939735 | orchestrator | Wednesday 11 March 2026 00:59:06 +0000 (0:00:00.253) 0:00:00.254 ******* 2026-03-11 01:00:01.939742 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager 2026-03-11 01:00:01.939749 | orchestrator | 2026-03-11 01:00:01.939755 | orchestrator | TASK [osism.services.cephclient : Create required directories] ***************** 2026-03-11 01:00:01.939761 | orchestrator | Wednesday 11 March 2026 00:59:06 +0000 (0:00:00.240) 0:00:00.494 ******* 2026-03-11 01:00:01.939767 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration) 2026-03-11 01:00:01.939773 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data) 2026-03-11 01:00:01.939779 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient) 2026-03-11 01:00:01.939785 | orchestrator | 2026-03-11 01:00:01.939791 | orchestrator | TASK [osism.services.cephclient : Copy 
configuration files] ******************** 2026-03-11 01:00:01.939797 | orchestrator | Wednesday 11 March 2026 00:59:07 +0000 (0:00:01.252) 0:00:01.746 ******* 2026-03-11 01:00:01.939804 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'}) 2026-03-11 01:00:01.939811 | orchestrator | 2026-03-11 01:00:01.939817 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] *************************** 2026-03-11 01:00:01.939824 | orchestrator | Wednesday 11 March 2026 00:59:08 +0000 (0:00:01.438) 0:00:03.184 ******* 2026-03-11 01:00:01.939831 | orchestrator | changed: [testbed-manager] 2026-03-11 01:00:01.939838 | orchestrator | 2026-03-11 01:00:01.939845 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] **************** 2026-03-11 01:00:01.939852 | orchestrator | Wednesday 11 March 2026 00:59:09 +0000 (0:00:00.913) 0:00:04.098 ******* 2026-03-11 01:00:01.939864 | orchestrator | changed: [testbed-manager] 2026-03-11 01:00:01.939872 | orchestrator | 2026-03-11 01:00:01.939879 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] ******************* 2026-03-11 01:00:01.939886 | orchestrator | Wednesday 11 March 2026 00:59:10 +0000 (0:00:01.015) 0:00:05.113 ******* 2026-03-11 01:00:01.939894 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left). 
2026-03-11 01:00:01.939901 | orchestrator | ok: [testbed-manager] 2026-03-11 01:00:01.939909 | orchestrator | 2026-03-11 01:00:01.939916 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************ 2026-03-11 01:00:01.939931 | orchestrator | Wednesday 11 March 2026 00:59:51 +0000 (0:00:40.451) 0:00:45.564 ******* 2026-03-11 01:00:01.939943 | orchestrator | changed: [testbed-manager] => (item=ceph) 2026-03-11 01:00:01.939951 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool) 2026-03-11 01:00:01.939959 | orchestrator | changed: [testbed-manager] => (item=rados) 2026-03-11 01:00:01.939965 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin) 2026-03-11 01:00:01.939972 | orchestrator | changed: [testbed-manager] => (item=rbd) 2026-03-11 01:00:01.939979 | orchestrator | 2026-03-11 01:00:01.939986 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ****************** 2026-03-11 01:00:01.939993 | orchestrator | Wednesday 11 March 2026 00:59:54 +0000 (0:00:03.640) 0:00:49.204 ******* 2026-03-11 01:00:01.940000 | orchestrator | ok: [testbed-manager] => (item=crushtool) 2026-03-11 01:00:01.940006 | orchestrator | 2026-03-11 01:00:01.940013 | orchestrator | TASK [osism.services.cephclient : Include package tasks] *********************** 2026-03-11 01:00:01.940020 | orchestrator | Wednesday 11 March 2026 00:59:55 +0000 (0:00:00.391) 0:00:49.596 ******* 2026-03-11 01:00:01.940027 | orchestrator | skipping: [testbed-manager] 2026-03-11 01:00:01.940033 | orchestrator | 2026-03-11 01:00:01.940040 | orchestrator | TASK [osism.services.cephclient : Include rook task] *************************** 2026-03-11 01:00:01.940047 | orchestrator | Wednesday 11 March 2026 00:59:55 +0000 (0:00:00.127) 0:00:49.723 ******* 2026-03-11 01:00:01.940054 | orchestrator | skipping: [testbed-manager] 2026-03-11 01:00:01.940061 | orchestrator | 2026-03-11 01:00:01.940068 | orchestrator | RUNNING HANDLER 
[osism.services.cephclient : Restart cephclient service] ******* 2026-03-11 01:00:01.940074 | orchestrator | Wednesday 11 March 2026 00:59:55 +0000 (0:00:00.397) 0:00:50.121 ******* 2026-03-11 01:00:01.940081 | orchestrator | changed: [testbed-manager] 2026-03-11 01:00:01.940088 | orchestrator | 2026-03-11 01:00:01.940095 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] *** 2026-03-11 01:00:01.940101 | orchestrator | Wednesday 11 March 2026 00:59:57 +0000 (0:00:01.270) 0:00:51.391 ******* 2026-03-11 01:00:01.940108 | orchestrator | changed: [testbed-manager] 2026-03-11 01:00:01.940115 | orchestrator | 2026-03-11 01:00:01.940122 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ****** 2026-03-11 01:00:01.940129 | orchestrator | Wednesday 11 March 2026 00:59:57 +0000 (0:00:00.640) 0:00:52.032 ******* 2026-03-11 01:00:01.940136 | orchestrator | changed: [testbed-manager] 2026-03-11 01:00:01.940143 | orchestrator | 2026-03-11 01:00:01.940150 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] ***** 2026-03-11 01:00:01.940157 | orchestrator | Wednesday 11 March 2026 00:59:58 +0000 (0:00:00.508) 0:00:52.540 ******* 2026-03-11 01:00:01.940163 | orchestrator | ok: [testbed-manager] => (item=ceph) 2026-03-11 01:00:01.940171 | orchestrator | ok: [testbed-manager] => (item=rados) 2026-03-11 01:00:01.940177 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin) 2026-03-11 01:00:01.940184 | orchestrator | ok: [testbed-manager] => (item=rbd) 2026-03-11 01:00:01.940191 | orchestrator | 2026-03-11 01:00:01.940198 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:00:01.940205 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2026-03-11 01:00:01.940212 | orchestrator | 2026-03-11 01:00:01.940219 | orchestrator | 2026-03-11 
01:00:01.940229 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:00:01.940237 | orchestrator | Wednesday 11 March 2026 00:59:59 +0000 (0:00:01.329) 0:00:53.870 ******* 2026-03-11 01:00:01.940244 | orchestrator | =============================================================================== 2026-03-11 01:00:01.940250 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 40.45s 2026-03-11 01:00:01.940257 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.64s 2026-03-11 01:00:01.940264 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.44s 2026-03-11 01:00:01.940271 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.33s 2026-03-11 01:00:01.940278 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.27s 2026-03-11 01:00:01.940285 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.25s 2026-03-11 01:00:01.940292 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.02s 2026-03-11 01:00:01.940298 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.91s 2026-03-11 01:00:01.940305 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.64s 2026-03-11 01:00:01.940312 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.51s 2026-03-11 01:00:01.940319 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.40s 2026-03-11 01:00:01.940326 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.39s 2026-03-11 01:00:01.940333 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.24s 2026-03-11 01:00:01.940339 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.13s 2026-03-11 01:00:01.940346 | orchestrator | 2026-03-11 01:00:01 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:04.973262 | orchestrator | 2026-03-11 01:00:04 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:04.976571 | orchestrator | 2026-03-11 01:00:04 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:04.980257 | orchestrator | 2026-03-11 01:00:04 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:04.982440 | orchestrator | 2026-03-11 01:00:04 | INFO  | Task 8c1360c6-cc91-4265-bd0d-765e2e84b00a is in state SUCCESS 2026-03-11 01:00:04.983777 | orchestrator | 2026-03-11 01:00:04.983823 | orchestrator | 2026-03-11 01:00:04.983831 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:00:04.983838 | orchestrator | 2026-03-11 01:00:04.983856 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:00:04.983861 | orchestrator | Wednesday 11 March 2026 00:57:23 +0000 (0:00:00.258) 0:00:00.258 ******* 2026-03-11 01:00:04.983867 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:00:04.983874 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:00:04.983879 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:00:04.983885 | orchestrator | 2026-03-11 01:00:04.983893 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 01:00:04.983899 | orchestrator | Wednesday 11 March 2026 00:57:24 +0000 (0:00:00.285) 0:00:00.543 ******* 2026-03-11 01:00:04.983905 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-11 01:00:04.983912 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-11 01:00:04.983918 | orchestrator | ok: [testbed-node-2] => 
(item=enable_keystone_True) 2026-03-11 01:00:04.983924 | orchestrator | 2026-03-11 01:00:04.983930 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2026-03-11 01:00:04.983936 | orchestrator | 2026-03-11 01:00:04.983942 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-11 01:00:04.983949 | orchestrator | Wednesday 11 March 2026 00:57:24 +0000 (0:00:00.444) 0:00:00.988 ******* 2026-03-11 01:00:04.983968 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:00:04.983973 | orchestrator | 2026-03-11 01:00:04.983977 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2026-03-11 01:00:04.983980 | orchestrator | Wednesday 11 March 2026 00:57:25 +0000 (0:00:00.531) 0:00:01.519 ******* 2026-03-11 01:00:04.983987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:04.984151 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:04.984176 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': 
'5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:04.984185 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984198 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 
01:00:04.984211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984218 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984232 | orchestrator | 2026-03-11 01:00:04.984310 | 
orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2026-03-11 01:00:04.984320 | orchestrator | Wednesday 11 March 2026 00:57:27 +0000 (0:00:01.961) 0:00:03.480 ******* 2026-03-11 01:00:04.984327 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:04.984333 | orchestrator | 2026-03-11 01:00:04.984343 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2026-03-11 01:00:04.984353 | orchestrator | Wednesday 11 March 2026 00:57:27 +0000 (0:00:00.141) 0:00:03.622 ******* 2026-03-11 01:00:04.984360 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:04.984365 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:00:04.984371 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:00:04.984377 | orchestrator | 2026-03-11 01:00:04.984384 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2026-03-11 01:00:04.984396 | orchestrator | Wednesday 11 March 2026 00:57:27 +0000 (0:00:00.443) 0:00:04.066 ******* 2026-03-11 01:00:04.984402 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-11 01:00:04.984408 | orchestrator | 2026-03-11 01:00:04.984414 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-11 01:00:04.984420 | orchestrator | Wednesday 11 March 2026 00:57:28 +0000 (0:00:00.826) 0:00:04.892 ******* 2026-03-11 01:00:04.984427 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:00:04.984433 | orchestrator | 2026-03-11 01:00:04.984440 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2026-03-11 01:00:04.984446 | orchestrator | Wednesday 11 March 2026 00:57:28 +0000 (0:00:00.448) 0:00:05.340 ******* 2026-03-11 01:00:04.984452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:04.984458 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 
01:00:04.984465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:04.984482 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984516 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984530 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984543 | orchestrator | 2026-03-11 01:00:04.984548 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2026-03-11 01:00:04.984552 | orchestrator | Wednesday 11 March 2026 00:57:32 +0000 (0:00:03.402) 0:00:08.742 ******* 2026-03-11 01:00:04.984567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': 
['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 01:00:04.984575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:04.984579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 01:00:04.984583 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:04.984587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 01:00:04.984591 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:04.984595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 01:00:04.984602 | 
orchestrator | skipping: [testbed-node-1] 2026-03-11 01:00:04.984610 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 01:00:04.984615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:04.984619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 01:00:04.984623 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:00:04.984627 | orchestrator | 2026-03-11 01:00:04.984631 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2026-03-11 01:00:04.984635 | orchestrator | Wednesday 11 March 2026 00:57:32 +0000 (0:00:00.513) 0:00:09.256 ******* 2026-03-11 01:00:04.984639 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 01:00:04.984645 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:04.984654 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 01:00:04.984658 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:04.984662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 
'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 01:00:04.984666 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:04.984670 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 01:00:04.984674 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:00:04.984678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 01:00:04.984689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:04.984694 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 01:00:04.984698 | orchestrator | skipping: 
[testbed-node-2] 2026-03-11 01:00:04.984701 | orchestrator | 2026-03-11 01:00:04.984705 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2026-03-11 01:00:04.984709 | orchestrator | Wednesday 11 March 2026 00:57:33 +0000 (0:00:00.690) 0:00:09.947 ******* 2026-03-11 01:00:04.984713 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:04.984717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 
'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:04.984730 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:04.984734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984773 | orchestrator | 2026-03-11 01:00:04.984777 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2026-03-11 01:00:04.984781 | orchestrator | Wednesday 11 March 2026 00:57:36 +0000 (0:00:03.205) 0:00:13.152 ******* 2026-03-11 01:00:04.984789 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:04.984794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:04.984798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:04.984805 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:04.984812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:04.984818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:04.984822 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984837 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:04.984851 | orchestrator | 2026-03-11 01:00:04.984858 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2026-03-11 01:00:04.984864 | orchestrator | Wednesday 11 March 2026 00:57:42 +0000 (0:00:05.321) 0:00:18.473 ******* 2026-03-11 01:00:04.984869 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:00:04.984875 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:00:04.984881 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:00:04.984887 | orchestrator | 2026-03-11 01:00:04.984894 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2026-03-11 01:00:04.984901 | orchestrator | Wednesday 11 March 2026 00:57:43 +0000 (0:00:01.311) 0:00:19.785 ******* 2026-03-11 01:00:04.984906 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:04.984912 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:00:04.984918 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:00:04.984924 | orchestrator | 2026-03-11 01:00:04.984931 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2026-03-11 01:00:04.984937 | orchestrator | Wednesday 11 March 2026 00:57:44 +0000 (0:00:00.610) 0:00:20.395 ******* 2026-03-11 
01:00:04.985052 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:04.985064 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:00:04.985070 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:00:04.985077 | orchestrator | 2026-03-11 01:00:04.985085 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2026-03-11 01:00:04.985092 | orchestrator | Wednesday 11 March 2026 00:57:44 +0000 (0:00:00.294) 0:00:20.690 ******* 2026-03-11 01:00:04.985099 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:04.985104 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:00:04.985109 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:00:04.985114 | orchestrator | 2026-03-11 01:00:04.985118 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2026-03-11 01:00:04.985123 | orchestrator | Wednesday 11 March 2026 00:57:44 +0000 (0:00:00.468) 0:00:21.159 ******* 2026-03-11 01:00:04.985141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 
'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 01:00:04.985147 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:04.985153 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 01:00:04.985162 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:04.985168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 01:00:04.985173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:04.985182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 01:00:04.985191 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:00:04.985200 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 
'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2026-03-11 01:00:04.985212 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2026-03-11 01:00:04.985218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2026-03-11 01:00:04.985225 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:00:04.985231 | orchestrator | 2026-03-11 01:00:04.985237 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-11 01:00:04.985243 | orchestrator | Wednesday 11 March 2026 00:57:45 +0000 (0:00:00.618) 0:00:21.777 ******* 2026-03-11 01:00:04.985248 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:04.985254 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:00:04.985260 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:00:04.985266 | orchestrator | 2026-03-11 01:00:04.985273 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2026-03-11 01:00:04.985278 | orchestrator | Wednesday 11 March 2026 00:57:45 +0000 (0:00:00.330) 0:00:22.108 ******* 2026-03-11 01:00:04.985285 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-11 01:00:04.985293 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-11 01:00:04.985299 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2026-03-11 01:00:04.985303 | orchestrator | 2026-03-11 01:00:04.985307 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2026-03-11 01:00:04.985311 | orchestrator | Wednesday 11 March 2026 00:57:47 +0000 (0:00:01.463) 0:00:23.572 ******* 2026-03-11 01:00:04.985315 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-11 01:00:04.985318 | orchestrator | 2026-03-11 01:00:04.985322 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2026-03-11 01:00:04.985326 | 
orchestrator | Wednesday 11 March 2026 00:57:48 +0000 (0:00:00.954) 0:00:24.526 ******* 2026-03-11 01:00:04.985330 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:04.985334 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:00:04.985340 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:00:04.985346 | orchestrator | 2026-03-11 01:00:04.985355 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2026-03-11 01:00:04.985362 | orchestrator | Wednesday 11 March 2026 00:57:48 +0000 (0:00:00.753) 0:00:25.279 ******* 2026-03-11 01:00:04.985368 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-11 01:00:04.985373 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-11 01:00:04.985379 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-11 01:00:04.985385 | orchestrator | 2026-03-11 01:00:04.985391 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2026-03-11 01:00:04.985402 | orchestrator | Wednesday 11 March 2026 00:57:49 +0000 (0:00:01.058) 0:00:26.338 ******* 2026-03-11 01:00:04.985409 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:00:04.985419 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:00:04.985427 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:00:04.985436 | orchestrator | 2026-03-11 01:00:04.985440 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2026-03-11 01:00:04.985444 | orchestrator | Wednesday 11 March 2026 00:57:50 +0000 (0:00:00.299) 0:00:26.638 ******* 2026-03-11 01:00:04.985448 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-11 01:00:04.985451 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-11 01:00:04.985455 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2026-03-11 01:00:04.985459 | 
orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-11 01:00:04.985463 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-11 01:00:04.985467 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2026-03-11 01:00:04.985470 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-11 01:00:04.985475 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-11 01:00:04.985478 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2026-03-11 01:00:04.985482 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-11 01:00:04.985486 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-11 01:00:04.985489 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2026-03-11 01:00:04.985561 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-11 01:00:04.985575 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-11 01:00:04.985579 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2026-03-11 01:00:04.985583 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-11 01:00:04.985587 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2026-03-11 01:00:04.985591 | orchestrator | changed: [testbed-node-0] => (item={'src': 
'id_rsa', 'dest': 'id_rsa'}) 2026-03-11 01:00:04.985595 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-11 01:00:04.985598 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-11 01:00:04.985602 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2026-03-11 01:00:04.985606 | orchestrator | 2026-03-11 01:00:04.985610 | orchestrator | TASK [keystone : Copying files for keystone-ssh] ******************************* 2026-03-11 01:00:04.985613 | orchestrator | Wednesday 11 March 2026 00:57:59 +0000 (0:00:08.859) 0:00:35.497 ******* 2026-03-11 01:00:04.985617 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-11 01:00:04.985621 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-11 01:00:04.985625 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2026-03-11 01:00:04.985628 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-11 01:00:04.985632 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-11 01:00:04.985636 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2026-03-11 01:00:04.985639 | orchestrator | 2026-03-11 01:00:04.985643 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2026-03-11 01:00:04.985651 | orchestrator | Wednesday 11 March 2026 00:58:01 +0000 (0:00:02.646) 0:00:38.143 ******* 2026-03-11 01:00:04.985664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:04.985669 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:04.985674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 
'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2026-03-11 01:00:04.985679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:04.985683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:04.985690 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2026-03-11 01:00:04.985698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:04.985703 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:04.985707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2026-03-11 01:00:04.985710 | orchestrator | 2026-03-11 01:00:04.985714 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-11 01:00:04.985718 | orchestrator | Wednesday 11 March 2026 00:58:03 +0000 (0:00:02.190) 0:00:40.334 ******* 2026-03-11 01:00:04.985722 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:04.985726 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:00:04.985730 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:00:04.985733 | orchestrator | 2026-03-11 01:00:04.985737 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2026-03-11 01:00:04.985741 | orchestrator | Wednesday 11 March 2026 00:58:04 +0000 (0:00:00.292) 0:00:40.626 ******* 2026-03-11 01:00:04.985745 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:00:04.985749 | orchestrator | 2026-03-11 01:00:04.985752 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2026-03-11 01:00:04.985756 | orchestrator | Wednesday 11 March 2026 00:58:06 +0000 (0:00:02.218) 0:00:42.844 ******* 2026-03-11 01:00:04.985760 | orchestrator | changed: 
[testbed-node-0] 2026-03-11 01:00:04.985764 | orchestrator | 2026-03-11 01:00:04.985771 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2026-03-11 01:00:04.985775 | orchestrator | Wednesday 11 March 2026 00:58:08 +0000 (0:00:02.117) 0:00:44.962 ******* 2026-03-11 01:00:04.985778 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:00:04.985782 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:00:04.985786 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:00:04.985790 | orchestrator | 2026-03-11 01:00:04.985793 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2026-03-11 01:00:04.985797 | orchestrator | Wednesday 11 March 2026 00:58:09 +0000 (0:00:00.990) 0:00:45.952 ******* 2026-03-11 01:00:04.985801 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:00:04.985805 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:00:04.985809 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:00:04.985812 | orchestrator | 2026-03-11 01:00:04.985816 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2026-03-11 01:00:04.985820 | orchestrator | Wednesday 11 March 2026 00:58:09 +0000 (0:00:00.342) 0:00:46.295 ******* 2026-03-11 01:00:04.985824 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:04.985828 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:00:04.985832 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:00:04.985836 | orchestrator | 2026-03-11 01:00:04.985839 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2026-03-11 01:00:04.985843 | orchestrator | Wednesday 11 March 2026 00:58:10 +0000 (0:00:00.510) 0:00:46.806 ******* 2026-03-11 01:00:04.985847 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:00:04.985851 | orchestrator | 2026-03-11 01:00:04.985855 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] 
****************** 2026-03-11 01:00:04.985859 | orchestrator | Wednesday 11 March 2026 00:58:23 +0000 (0:00:13.202) 0:01:00.009 ******* 2026-03-11 01:00:04.985863 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:00:04.985867 | orchestrator | 2026-03-11 01:00:04.985870 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-11 01:00:04.985874 | orchestrator | Wednesday 11 March 2026 00:58:33 +0000 (0:00:10.306) 0:01:10.316 ******* 2026-03-11 01:00:04.985878 | orchestrator | 2026-03-11 01:00:04.985882 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-11 01:00:04.985886 | orchestrator | Wednesday 11 March 2026 00:58:33 +0000 (0:00:00.065) 0:01:10.381 ******* 2026-03-11 01:00:04.985889 | orchestrator | 2026-03-11 01:00:04.985893 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2026-03-11 01:00:04.985899 | orchestrator | Wednesday 11 March 2026 00:58:34 +0000 (0:00:00.065) 0:01:10.447 ******* 2026-03-11 01:00:04.985903 | orchestrator | 2026-03-11 01:00:04.985909 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2026-03-11 01:00:04.985913 | orchestrator | Wednesday 11 March 2026 00:58:34 +0000 (0:00:00.065) 0:01:10.513 ******* 2026-03-11 01:00:04.985917 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:00:04.985921 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:00:04.985924 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:00:04.985928 | orchestrator | 2026-03-11 01:00:04.985932 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2026-03-11 01:00:04.985936 | orchestrator | Wednesday 11 March 2026 00:58:48 +0000 (0:00:14.547) 0:01:25.060 ******* 2026-03-11 01:00:04.985939 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:00:04.985943 | orchestrator | changed: [testbed-node-1] 2026-03-11 
01:00:04.985947 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:00:04.985951 | orchestrator | 2026-03-11 01:00:04.985955 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2026-03-11 01:00:04.985958 | orchestrator | Wednesday 11 March 2026 00:58:58 +0000 (0:00:09.791) 0:01:34.851 ******* 2026-03-11 01:00:04.985962 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:00:04.985966 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:00:04.985970 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:00:04.985973 | orchestrator | 2026-03-11 01:00:04.985977 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-11 01:00:04.985983 | orchestrator | Wednesday 11 March 2026 00:59:04 +0000 (0:00:05.809) 0:01:40.661 ******* 2026-03-11 01:00:04.985987 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:00:04.985991 | orchestrator | 2026-03-11 01:00:04.985995 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2026-03-11 01:00:04.985998 | orchestrator | Wednesday 11 March 2026 00:59:04 +0000 (0:00:00.692) 0:01:41.353 ******* 2026-03-11 01:00:04.986002 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:00:04.986006 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:00:04.986010 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:00:04.986039 | orchestrator | 2026-03-11 01:00:04.986043 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2026-03-11 01:00:04.986048 | orchestrator | Wednesday 11 March 2026 00:59:05 +0000 (0:00:00.769) 0:01:42.123 ******* 2026-03-11 01:00:04.986052 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:00:04.986056 | orchestrator | 2026-03-11 01:00:04.986060 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] 
**** 2026-03-11 01:00:04.986064 | orchestrator | Wednesday 11 March 2026 00:59:07 +0000 (0:00:01.708) 0:01:43.832 ******* 2026-03-11 01:00:04.986067 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2026-03-11 01:00:04.986071 | orchestrator | 2026-03-11 01:00:04.986075 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2026-03-11 01:00:04.986079 | orchestrator | Wednesday 11 March 2026 00:59:20 +0000 (0:00:13.044) 0:01:56.876 ******* 2026-03-11 01:00:04.986083 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2026-03-11 01:00:04.986087 | orchestrator | 2026-03-11 01:00:04.986091 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2026-03-11 01:00:04.986101 | orchestrator | Wednesday 11 March 2026 00:59:49 +0000 (0:00:29.111) 0:02:25.987 ******* 2026-03-11 01:00:04.986106 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2026-03-11 01:00:04.986109 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2026-03-11 01:00:04.986117 | orchestrator | 2026-03-11 01:00:04.986121 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2026-03-11 01:00:04.986125 | orchestrator | Wednesday 11 March 2026 00:59:57 +0000 (0:00:07.954) 0:02:33.941 ******* 2026-03-11 01:00:04.986129 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:04.986132 | orchestrator | 2026-03-11 01:00:04.986136 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2026-03-11 01:00:04.986140 | orchestrator | Wednesday 11 March 2026 00:59:57 +0000 (0:00:00.100) 0:02:34.041 ******* 2026-03-11 01:00:04.986144 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:04.986148 | orchestrator | 2026-03-11 01:00:04.986152 | orchestrator | TASK [service-ks-register : keystone | 
Creating roles] ************************* 2026-03-11 01:00:04.986155 | orchestrator | Wednesday 11 March 2026 00:59:57 +0000 (0:00:00.105) 0:02:34.147 ******* 2026-03-11 01:00:04.986159 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:04.986163 | orchestrator | 2026-03-11 01:00:04.986167 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2026-03-11 01:00:04.986171 | orchestrator | Wednesday 11 March 2026 00:59:57 +0000 (0:00:00.105) 0:02:34.253 ******* 2026-03-11 01:00:04.986174 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:04.986178 | orchestrator | 2026-03-11 01:00:04.986182 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2026-03-11 01:00:04.986186 | orchestrator | Wednesday 11 March 2026 00:59:58 +0000 (0:00:00.437) 0:02:34.691 ******* 2026-03-11 01:00:04.986190 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:00:04.986194 | orchestrator | 2026-03-11 01:00:04.986197 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2026-03-11 01:00:04.986201 | orchestrator | Wednesday 11 March 2026 01:00:02 +0000 (0:00:03.870) 0:02:38.561 ******* 2026-03-11 01:00:04.986208 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:00:04.986211 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:00:04.986215 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:00:04.986219 | orchestrator | 2026-03-11 01:00:04.986223 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:00:04.986228 | orchestrator | testbed-node-0 : ok=33  changed=19  unreachable=0 failed=0 skipped=17  rescued=0 ignored=0 2026-03-11 01:00:04.986235 | orchestrator | testbed-node-1 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-11 01:00:04.986241 | orchestrator | testbed-node-2 : ok=22  changed=12  unreachable=0 failed=0 skipped=12  
rescued=0 ignored=0 2026-03-11 01:00:04.986245 | orchestrator | 2026-03-11 01:00:04.986249 | orchestrator | 2026-03-11 01:00:04.986253 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:00:04.986257 | orchestrator | Wednesday 11 March 2026 01:00:02 +0000 (0:00:00.462) 0:02:39.023 ******* 2026-03-11 01:00:04.986261 | orchestrator | =============================================================================== 2026-03-11 01:00:04.986264 | orchestrator | service-ks-register : keystone | Creating services --------------------- 29.11s 2026-03-11 01:00:04.986268 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 14.55s 2026-03-11 01:00:04.986272 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 13.20s 2026-03-11 01:00:04.986276 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 13.04s 2026-03-11 01:00:04.986280 | orchestrator | keystone : Running Keystone fernet bootstrap container ----------------- 10.31s 2026-03-11 01:00:04.986283 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.79s 2026-03-11 01:00:04.986287 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 8.86s 2026-03-11 01:00:04.986291 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.95s 2026-03-11 01:00:04.986295 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.81s 2026-03-11 01:00:04.986299 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.32s 2026-03-11 01:00:04.986303 | orchestrator | keystone : Creating default user role ----------------------------------- 3.87s 2026-03-11 01:00:04.986306 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.40s 2026-03-11 01:00:04.986310 | 
orchestrator | keystone : Copying over config.json files for services ------------------ 3.20s 2026-03-11 01:00:04.986314 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 2.65s 2026-03-11 01:00:04.986318 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.22s 2026-03-11 01:00:04.986322 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.19s 2026-03-11 01:00:04.986325 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.12s 2026-03-11 01:00:04.986329 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 1.96s 2026-03-11 01:00:04.986333 | orchestrator | keystone : Run key distribution ----------------------------------------- 1.71s 2026-03-11 01:00:04.986337 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.46s 2026-03-11 01:00:04.986341 | orchestrator | 2026-03-11 01:00:04 | INFO  | Task 87a80170-617a-48a4-9d1f-e57e96781bae is in state STARTED 2026-03-11 01:00:04.986344 | orchestrator | 2026-03-11 01:00:04 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:04.986348 | orchestrator | 2026-03-11 01:00:04 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:08.019446 | orchestrator | 2026-03-11 01:00:08 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:08.019540 | orchestrator | 2026-03-11 01:00:08 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:08.019569 | orchestrator | 2026-03-11 01:00:08 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:08.019576 | orchestrator | 2026-03-11 01:00:08 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:00:08.019581 | orchestrator | 2026-03-11 01:00:08 | INFO  | Task 
87a80170-617a-48a4-9d1f-e57e96781bae is in state SUCCESS 2026-03-11 01:00:08.019586 | orchestrator | 2026-03-11 01:00:08 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:08.019592 | orchestrator | 2026-03-11 01:00:08 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:11.067453 | orchestrator | 2026-03-11 01:00:11 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:11.067559 | orchestrator | 2026-03-11 01:00:11 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:11.067565 | orchestrator | 2026-03-11 01:00:11 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:11.067570 | orchestrator | 2026-03-11 01:00:11 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:00:11.067574 | orchestrator | 2026-03-11 01:00:11 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:11.067578 | orchestrator | 2026-03-11 01:00:11 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:14.116968 | orchestrator | 2026-03-11 01:00:14 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:14.117031 | orchestrator | 2026-03-11 01:00:14 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:14.117040 | orchestrator | 2026-03-11 01:00:14 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:14.117047 | orchestrator | 2026-03-11 01:00:14 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:00:14.117054 | orchestrator | 2026-03-11 01:00:14 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:14.117061 | orchestrator | 2026-03-11 01:00:14 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:17.139918 | orchestrator | 2026-03-11 01:00:17 | INFO  | Task 
ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:17.141111 | orchestrator | 2026-03-11 01:00:17 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:17.144552 | orchestrator | 2026-03-11 01:00:17 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:17.146145 | orchestrator | 2026-03-11 01:00:17 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:00:17.148637 | orchestrator | 2026-03-11 01:00:17 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:17.148772 | orchestrator | 2026-03-11 01:00:17 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:20.181156 | orchestrator | 2026-03-11 01:00:20 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:20.181932 | orchestrator | 2026-03-11 01:00:20 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:20.182807 | orchestrator | 2026-03-11 01:00:20 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:20.183604 | orchestrator | 2026-03-11 01:00:20 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:00:20.184629 | orchestrator | 2026-03-11 01:00:20 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:20.184737 | orchestrator | 2026-03-11 01:00:20 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:23.217341 | orchestrator | 2026-03-11 01:00:23 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:23.218897 | orchestrator | 2026-03-11 01:00:23 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:23.220198 | orchestrator | 2026-03-11 01:00:23 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:23.221791 | orchestrator | 2026-03-11 01:00:23 | INFO  | Task 
cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:00:23.223126 | orchestrator | 2026-03-11 01:00:23 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:23.223937 | orchestrator | 2026-03-11 01:00:23 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:26.263975 | orchestrator | 2026-03-11 01:00:26 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:26.265998 | orchestrator | 2026-03-11 01:00:26 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:26.267749 | orchestrator | 2026-03-11 01:00:26 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:26.269700 | orchestrator | 2026-03-11 01:00:26 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:00:26.270830 | orchestrator | 2026-03-11 01:00:26 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:26.270870 | orchestrator | 2026-03-11 01:00:26 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:29.315510 | orchestrator | 2026-03-11 01:00:29 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:29.315593 | orchestrator | 2026-03-11 01:00:29 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:29.315602 | orchestrator | 2026-03-11 01:00:29 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:29.315609 | orchestrator | 2026-03-11 01:00:29 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:00:29.315616 | orchestrator | 2026-03-11 01:00:29 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:29.315623 | orchestrator | 2026-03-11 01:00:29 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:32.345841 | orchestrator | 2026-03-11 01:00:32 | INFO  | Task 
ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:32.350092 | orchestrator | 2026-03-11 01:00:32 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:32.353680 | orchestrator | 2026-03-11 01:00:32 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:32.357654 | orchestrator | 2026-03-11 01:00:32 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:00:32.357960 | orchestrator | 2026-03-11 01:00:32 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:32.358002 | orchestrator | 2026-03-11 01:00:32 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:35.437715 | orchestrator | 2026-03-11 01:00:35 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:35.437817 | orchestrator | 2026-03-11 01:00:35 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:35.437828 | orchestrator | 2026-03-11 01:00:35 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:35.437860 | orchestrator | 2026-03-11 01:00:35 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:00:35.437865 | orchestrator | 2026-03-11 01:00:35 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:35.437870 | orchestrator | 2026-03-11 01:00:35 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:38.462705 | orchestrator | 2026-03-11 01:00:38 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:38.463864 | orchestrator | 2026-03-11 01:00:38 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:38.464437 | orchestrator | 2026-03-11 01:00:38 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:38.465088 | orchestrator | 2026-03-11 01:00:38 | INFO  | Task 
cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:00:38.465804 | orchestrator | 2026-03-11 01:00:38 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:38.465844 | orchestrator | 2026-03-11 01:00:38 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:41.500029 | orchestrator | 2026-03-11 01:00:41 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:41.500925 | orchestrator | 2026-03-11 01:00:41 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:41.501881 | orchestrator | 2026-03-11 01:00:41 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:41.502861 | orchestrator | 2026-03-11 01:00:41 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:00:41.503660 | orchestrator | 2026-03-11 01:00:41 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:41.503694 | orchestrator | 2026-03-11 01:00:41 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:44.553611 | orchestrator | 2026-03-11 01:00:44 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:44.553680 | orchestrator | 2026-03-11 01:00:44 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:44.557097 | orchestrator | 2026-03-11 01:00:44 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:44.557790 | orchestrator | 2026-03-11 01:00:44 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:00:44.559193 | orchestrator | 2026-03-11 01:00:44 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:44.559234 | orchestrator | 2026-03-11 01:00:44 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:47.606606 | orchestrator | 2026-03-11 01:00:47 | INFO  | Task 
ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:47.606684 | orchestrator | 2026-03-11 01:00:47 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:47.607344 | orchestrator | 2026-03-11 01:00:47 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:47.608890 | orchestrator | 2026-03-11 01:00:47 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:00:47.609717 | orchestrator | 2026-03-11 01:00:47 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:47.609807 | orchestrator | 2026-03-11 01:00:47 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:50.634936 | orchestrator | 2026-03-11 01:00:50 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:50.635662 | orchestrator | 2026-03-11 01:00:50 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:50.636895 | orchestrator | 2026-03-11 01:00:50 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:50.637675 | orchestrator | 2026-03-11 01:00:50 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:00:50.638266 | orchestrator | 2026-03-11 01:00:50 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:50.638339 | orchestrator | 2026-03-11 01:00:50 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:53.668337 | orchestrator | 2026-03-11 01:00:53 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:53.668393 | orchestrator | 2026-03-11 01:00:53 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:53.668882 | orchestrator | 2026-03-11 01:00:53 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:53.669446 | orchestrator | 2026-03-11 01:00:53 | INFO  | Task 
cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:00:53.669924 | orchestrator | 2026-03-11 01:00:53 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:53.669978 | orchestrator | 2026-03-11 01:00:53 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:56.695924 | orchestrator | 2026-03-11 01:00:56 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:56.696062 | orchestrator | 2026-03-11 01:00:56 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:56.696669 | orchestrator | 2026-03-11 01:00:56 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:56.697240 | orchestrator | 2026-03-11 01:00:56 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:00:56.697855 | orchestrator | 2026-03-11 01:00:56 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:56.697877 | orchestrator | 2026-03-11 01:00:56 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:00:59.718965 | orchestrator | 2026-03-11 01:00:59 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:00:59.719029 | orchestrator | 2026-03-11 01:00:59 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:00:59.719742 | orchestrator | 2026-03-11 01:00:59 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:00:59.721812 | orchestrator | 2026-03-11 01:00:59 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:00:59.722190 | orchestrator | 2026-03-11 01:00:59 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:00:59.722217 | orchestrator | 2026-03-11 01:00:59 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:02.788538 | orchestrator | 2026-03-11 01:01:02 | INFO  | Task 
ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:02.788599 | orchestrator | 2026-03-11 01:01:02 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:02.788608 | orchestrator | 2026-03-11 01:01:02 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:01:02.788615 | orchestrator | 2026-03-11 01:01:02 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:02.788622 | orchestrator | 2026-03-11 01:01:02 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:02.788628 | orchestrator | 2026-03-11 01:01:02 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:05.797961 | orchestrator | 2026-03-11 01:01:05 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:05.798359 | orchestrator | 2026-03-11 01:01:05 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:05.799218 | orchestrator | 2026-03-11 01:01:05 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:01:05.799883 | orchestrator | 2026-03-11 01:01:05 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:05.800528 | orchestrator | 2026-03-11 01:01:05 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:05.800561 | orchestrator | 2026-03-11 01:01:05 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:08.824618 | orchestrator | 2026-03-11 01:01:08 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:08.825322 | orchestrator | 2026-03-11 01:01:08 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:08.825975 | orchestrator | 2026-03-11 01:01:08 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:01:08.827398 | orchestrator | 2026-03-11 01:01:08 | INFO  | Task 
cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:08.828159 | orchestrator | 2026-03-11 01:01:08 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:08.828189 | orchestrator | 2026-03-11 01:01:08 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:11.849934 | orchestrator | 2026-03-11 01:01:11 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:11.850060 | orchestrator | 2026-03-11 01:01:11 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:11.850665 | orchestrator | 2026-03-11 01:01:11 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:01:11.851188 | orchestrator | 2026-03-11 01:01:11 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:11.852215 | orchestrator | 2026-03-11 01:01:11 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:11.852242 | orchestrator | 2026-03-11 01:01:11 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:14.871270 | orchestrator | 2026-03-11 01:01:14 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:14.871366 | orchestrator | 2026-03-11 01:01:14 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:14.872164 | orchestrator | 2026-03-11 01:01:14 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:01:14.872751 | orchestrator | 2026-03-11 01:01:14 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:14.873545 | orchestrator | 2026-03-11 01:01:14 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:14.873574 | orchestrator | 2026-03-11 01:01:14 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:17.898832 | orchestrator | 2026-03-11 01:01:17 | INFO  | Task 
ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:17.899135 | orchestrator | 2026-03-11 01:01:17 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:17.899829 | orchestrator | 2026-03-11 01:01:17 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:01:17.900585 | orchestrator | 2026-03-11 01:01:17 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:17.901989 | orchestrator | 2026-03-11 01:01:17 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:17.902041 | orchestrator | 2026-03-11 01:01:17 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:20.930828 | orchestrator | 2026-03-11 01:01:20 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:20.931228 | orchestrator | 2026-03-11 01:01:20 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:20.931836 | orchestrator | 2026-03-11 01:01:20 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:01:20.932421 | orchestrator | 2026-03-11 01:01:20 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:20.932976 | orchestrator | 2026-03-11 01:01:20 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:20.933002 | orchestrator | 2026-03-11 01:01:20 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:23.972779 | orchestrator | 2026-03-11 01:01:23 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:23.972967 | orchestrator | 2026-03-11 01:01:23 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:23.973634 | orchestrator | 2026-03-11 01:01:23 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:01:23.974462 | orchestrator | 2026-03-11 01:01:23 | INFO  | Task 
cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:23.974954 | orchestrator | 2026-03-11 01:01:23 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:23.974987 | orchestrator | 2026-03-11 01:01:23 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:26.996670 | orchestrator | 2026-03-11 01:01:26 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:26.996859 | orchestrator | 2026-03-11 01:01:26 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:26.997887 | orchestrator | 2026-03-11 01:01:27 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state STARTED 2026-03-11 01:01:26.999274 | orchestrator | 2026-03-11 01:01:27 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:26.999932 | orchestrator | 2026-03-11 01:01:27 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:26.999979 | orchestrator | 2026-03-11 01:01:27 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:30.054149 | orchestrator | 2026-03-11 01:01:30 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:30.054557 | orchestrator | 2026-03-11 01:01:30 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:30.055441 | orchestrator | 2026-03-11 01:01:30 | INFO  | Task d5966879-e22d-4708-95e8-e8e5b086392e is in state SUCCESS 2026-03-11 01:01:30.056117 | orchestrator | 2026-03-11 01:01:30 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:30.057014 | orchestrator | 2026-03-11 01:01:30 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:30.057056 | orchestrator | 2026-03-11 01:01:30 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:33.091950 | orchestrator | 2026-03-11 01:01:33 | INFO  | Task 
ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:33.092461 | orchestrator | 2026-03-11 01:01:33 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:33.093009 | orchestrator | 2026-03-11 01:01:33 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:33.093790 | orchestrator | 2026-03-11 01:01:33 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:33.093818 | orchestrator | 2026-03-11 01:01:33 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:36.117308 | orchestrator | 2026-03-11 01:01:36 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:36.117440 | orchestrator | 2026-03-11 01:01:36 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:36.120535 | orchestrator | 2026-03-11 01:01:36 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:36.120999 | orchestrator | 2026-03-11 01:01:36 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:36.121026 | orchestrator | 2026-03-11 01:01:36 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:39.158762 | orchestrator | 2026-03-11 01:01:39 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:39.159302 | orchestrator | 2026-03-11 01:01:39 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:39.159882 | orchestrator | 2026-03-11 01:01:39 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:39.162718 | orchestrator | 2026-03-11 01:01:39 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:39.162770 | orchestrator | 2026-03-11 01:01:39 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:42.196470 | orchestrator | 2026-03-11 01:01:42 | INFO  | Task 
ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:42.196800 | orchestrator | 2026-03-11 01:01:42 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:42.197695 | orchestrator | 2026-03-11 01:01:42 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:42.198490 | orchestrator | 2026-03-11 01:01:42 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:42.199434 | orchestrator | 2026-03-11 01:01:42 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:45.219005 | orchestrator | 2026-03-11 01:01:45 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:45.220536 | orchestrator | 2026-03-11 01:01:45 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:45.221075 | orchestrator | 2026-03-11 01:01:45 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:45.221834 | orchestrator | 2026-03-11 01:01:45 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:45.221862 | orchestrator | 2026-03-11 01:01:45 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:48.244107 | orchestrator | 2026-03-11 01:01:48 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:48.244950 | orchestrator | 2026-03-11 01:01:48 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:48.245196 | orchestrator | 2026-03-11 01:01:48 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:48.246179 | orchestrator | 2026-03-11 01:01:48 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:48.246238 | orchestrator | 2026-03-11 01:01:48 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:51.270851 | orchestrator | 2026-03-11 01:01:51 | INFO  | Task 
ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:51.271513 | orchestrator | 2026-03-11 01:01:51 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:51.271903 | orchestrator | 2026-03-11 01:01:51 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:51.274143 | orchestrator | 2026-03-11 01:01:51 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:51.274202 | orchestrator | 2026-03-11 01:01:51 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:54.316034 | orchestrator | 2026-03-11 01:01:54 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:54.318615 | orchestrator | 2026-03-11 01:01:54 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:54.320547 | orchestrator | 2026-03-11 01:01:54 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:54.321284 | orchestrator | 2026-03-11 01:01:54 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:54.321327 | orchestrator | 2026-03-11 01:01:54 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:01:57.368752 | orchestrator | 2026-03-11 01:01:57 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:01:57.369616 | orchestrator | 2026-03-11 01:01:57 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:01:57.370508 | orchestrator | 2026-03-11 01:01:57 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:01:57.373159 | orchestrator | 2026-03-11 01:01:57 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:01:57.373207 | orchestrator | 2026-03-11 01:01:57 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:00.414837 | orchestrator | 2026-03-11 01:02:00 | INFO  | Task 
ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:02:00.415144 | orchestrator | 2026-03-11 01:02:00 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:02:00.415873 | orchestrator | 2026-03-11 01:02:00 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:02:00.416913 | orchestrator | 2026-03-11 01:02:00 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:02:00.416938 | orchestrator | 2026-03-11 01:02:00 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:03.526529 | orchestrator | 2026-03-11 01:02:03 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:02:03.526746 | orchestrator | 2026-03-11 01:02:03 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:02:03.527556 | orchestrator | 2026-03-11 01:02:03 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:02:03.528235 | orchestrator | 2026-03-11 01:02:03 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:02:03.528250 | orchestrator | 2026-03-11 01:02:03 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:06.586854 | orchestrator | 2026-03-11 01:02:06 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:02:06.586926 | orchestrator | 2026-03-11 01:02:06 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:02:06.586933 | orchestrator | 2026-03-11 01:02:06 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:02:06.587900 | orchestrator | 2026-03-11 01:02:06 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:02:06.587972 | orchestrator | 2026-03-11 01:02:06 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:09.613051 | orchestrator | 2026-03-11 01:02:09 | INFO  | Task 
ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:02:09.614255 | orchestrator | 2026-03-11 01:02:09 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:02:09.614746 | orchestrator | 2026-03-11 01:02:09 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:02:09.615407 | orchestrator | 2026-03-11 01:02:09 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:02:09.615457 | orchestrator | 2026-03-11 01:02:09 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:12.643945 | orchestrator | 2026-03-11 01:02:12 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:02:12.644004 | orchestrator | 2026-03-11 01:02:12 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:02:12.644259 | orchestrator | 2026-03-11 01:02:12 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:02:12.644998 | orchestrator | 2026-03-11 01:02:12 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:02:12.645034 | orchestrator | 2026-03-11 01:02:12 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:15.675589 | orchestrator | 2026-03-11 01:02:15 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:02:15.678316 | orchestrator | 2026-03-11 01:02:15 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state STARTED 2026-03-11 01:02:15.679588 | orchestrator | 2026-03-11 01:02:15 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:02:15.681004 | orchestrator | 2026-03-11 01:02:15 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:02:15.681041 | orchestrator | 2026-03-11 01:02:15 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:18.712882 | orchestrator | 2026-03-11 01:02:18 | INFO  | Task 
ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:02:18.713850 | orchestrator | 2026-03-11 01:02:18 | INFO  | Task e2c043b8-0e3d-4ca4-a473-f82294750f4c is in state SUCCESS 2026-03-11 01:02:18.715179 | orchestrator | 2026-03-11 01:02:18.715213 | orchestrator | 2026-03-11 01:02:18.715220 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:02:18.715225 | orchestrator | 2026-03-11 01:02:18.715230 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:02:18.715235 | orchestrator | Wednesday 11 March 2026 01:00:04 +0000 (0:00:00.184) 0:00:00.184 ******* 2026-03-11 01:02:18.715240 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:02:18.715245 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:02:18.715250 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:02:18.715255 | orchestrator | 2026-03-11 01:02:18.715259 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 01:02:18.715264 | orchestrator | Wednesday 11 March 2026 01:00:04 +0000 (0:00:00.618) 0:00:00.803 ******* 2026-03-11 01:02:18.715269 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2026-03-11 01:02:18.715274 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2026-03-11 01:02:18.715278 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2026-03-11 01:02:18.715282 | orchestrator | 2026-03-11 01:02:18.715287 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2026-03-11 01:02:18.715291 | orchestrator | 2026-03-11 01:02:18.715296 | orchestrator | TASK [Waiting for Keystone public port to be UP] ******************************* 2026-03-11 01:02:18.715301 | orchestrator | Wednesday 11 March 2026 01:00:05 +0000 (0:00:00.840) 0:00:01.643 ******* 2026-03-11 01:02:18.715317 | orchestrator | ok: [testbed-node-2] 2026-03-11 
01:02:18.715322 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:02:18.715340 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:02:18.715346 | orchestrator | 2026-03-11 01:02:18.715351 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:02:18.715356 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:02:18.715361 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:02:18.715367 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:02:18.715375 | orchestrator | 2026-03-11 01:02:18.715382 | orchestrator | 2026-03-11 01:02:18.715391 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:02:18.715402 | orchestrator | Wednesday 11 March 2026 01:00:06 +0000 (0:00:00.841) 0:00:02.485 ******* 2026-03-11 01:02:18.715412 | orchestrator | =============================================================================== 2026-03-11 01:02:18.715419 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.84s 2026-03-11 01:02:18.715427 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s 2026-03-11 01:02:18.715443 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.62s 2026-03-11 01:02:18.715451 | orchestrator | 2026-03-11 01:02:18.715459 | orchestrator | [WARNING]: Collection community.general does not support Ansible version 2026-03-11 01:02:18.715467 | orchestrator | 2.16.14 2026-03-11 01:02:18.715475 | orchestrator | 2026-03-11 01:02:18.715483 | orchestrator | PLAY [Bootstrap ceph dashboard] ************************************************ 2026-03-11 01:02:18.715488 | orchestrator | 2026-03-11 01:02:18.715493 | orchestrator | TASK [Disable the ceph
dashboard] ********************************************** 2026-03-11 01:02:18.715497 | orchestrator | Wednesday 11 March 2026 01:00:04 +0000 (0:00:00.302) 0:00:00.302 ******* 2026-03-11 01:02:18.715502 | orchestrator | changed: [testbed-manager] 2026-03-11 01:02:18.715506 | orchestrator | 2026-03-11 01:02:18.715511 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ****************************************** 2026-03-11 01:02:18.715516 | orchestrator | Wednesday 11 March 2026 01:00:06 +0000 (0:00:01.822) 0:00:02.125 ******* 2026-03-11 01:02:18.715520 | orchestrator | changed: [testbed-manager] 2026-03-11 01:02:18.715525 | orchestrator | 2026-03-11 01:02:18.715530 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] *********************************** 2026-03-11 01:02:18.715534 | orchestrator | Wednesday 11 March 2026 01:00:06 +0000 (0:00:00.942) 0:00:03.067 ******* 2026-03-11 01:02:18.715539 | orchestrator | changed: [testbed-manager] 2026-03-11 01:02:18.715543 | orchestrator | 2026-03-11 01:02:18.715548 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ******************************** 2026-03-11 01:02:18.715552 | orchestrator | Wednesday 11 March 2026 01:00:07 +0000 (0:00:00.800) 0:00:03.868 ******* 2026-03-11 01:02:18.715557 | orchestrator | changed: [testbed-manager] 2026-03-11 01:02:18.715561 | orchestrator | 2026-03-11 01:02:18.715566 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] **************************** 2026-03-11 01:02:18.715757 | orchestrator | Wednesday 11 March 2026 01:00:08 +0000 (0:00:01.057) 0:00:04.925 ******* 2026-03-11 01:02:18.715763 | orchestrator | changed: [testbed-manager] 2026-03-11 01:02:18.715767 | orchestrator | 2026-03-11 01:02:18.715772 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] ********************** 2026-03-11 01:02:18.715777 | orchestrator | Wednesday 11 March 2026 01:00:09 +0000 (0:00:00.959) 0:00:05.885 ******* 2026-03-11 01:02:18.715781 | 
orchestrator | changed: [testbed-manager] 2026-03-11 01:02:18.715786 | orchestrator | 2026-03-11 01:02:18.715790 | orchestrator | TASK [Enable the ceph dashboard] *********************************************** 2026-03-11 01:02:18.715798 | orchestrator | Wednesday 11 March 2026 01:00:10 +0000 (0:00:01.063) 0:00:06.948 ******* 2026-03-11 01:02:18.715816 | orchestrator | changed: [testbed-manager] 2026-03-11 01:02:18.715826 | orchestrator | 2026-03-11 01:02:18.715834 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] ************************* 2026-03-11 01:02:18.715842 | orchestrator | Wednesday 11 March 2026 01:00:12 +0000 (0:00:02.035) 0:00:08.984 ******* 2026-03-11 01:02:18.715849 | orchestrator | changed: [testbed-manager] 2026-03-11 01:02:18.715857 | orchestrator | 2026-03-11 01:02:18.715865 | orchestrator | TASK [Create admin user] ******************************************************* 2026-03-11 01:02:18.715873 | orchestrator | Wednesday 11 March 2026 01:00:13 +0000 (0:00:01.040) 0:00:10.024 ******* 2026-03-11 01:02:18.715881 | orchestrator | changed: [testbed-manager] 2026-03-11 01:02:18.715889 | orchestrator | 2026-03-11 01:02:18.715928 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] *********************** 2026-03-11 01:02:18.715936 | orchestrator | Wednesday 11 March 2026 01:01:14 +0000 (0:01:00.621) 0:01:10.645 ******* 2026-03-11 01:02:18.715941 | orchestrator | skipping: [testbed-manager] 2026-03-11 01:02:18.715945 | orchestrator | 2026-03-11 01:02:18.715950 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-11 01:02:18.715954 | orchestrator | 2026-03-11 01:02:18.715959 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-11 01:02:18.715964 | orchestrator | Wednesday 11 March 2026 01:01:14 +0000 (0:00:00.115) 0:01:10.761 ******* 2026-03-11 01:02:18.715968 | orchestrator | changed: [testbed-node-0] 
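The long runs of "Task <uuid> is in state STARTED … Wait 1 second(s) until the next check" earlier in this log come from a simple polling loop over task IDs: query each task's state once per cycle, then sleep before the next check, until every task reaches a terminal state. A minimal sketch of that loop, assuming a hypothetical `get_state(task_id)` lookup (this is not the actual OSISM client API):

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0):
    """Poll each task until every one reaches a terminal state.

    Mirrors the log output above: one state line per task per cycle,
    then a fixed sleep before the next check.  `get_state` is a
    hypothetical callable returning e.g. "STARTED" or "SUCCESS".
    """
    terminal = {"SUCCESS", "FAILURE"}
    pending = list(task_ids)
    states = {}
    while pending:
        for task_id in pending:
            states[task_id] = get_state(task_id)
            print(f"Task {task_id} is in state {states[task_id]}")
        # Drop tasks that have finished; keep polling the rest.
        pending = [t for t in pending if states[t] not in terminal]
        if pending:
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
    return states
```

Note that tasks leave the report as soon as they finish, which matches the log: `d5966879…` stops appearing after its SUCCESS line while the remaining four UUIDs keep being printed each cycle.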
2026-03-11 01:02:18.715973 | orchestrator | 2026-03-11 01:02:18.715977 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-11 01:02:18.715982 | orchestrator | 2026-03-11 01:02:18.715986 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-11 01:02:18.715990 | orchestrator | Wednesday 11 March 2026 01:01:16 +0000 (0:00:01.422) 0:01:12.183 ******* 2026-03-11 01:02:18.715995 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:02:18.715999 | orchestrator | 2026-03-11 01:02:18.716004 | orchestrator | PLAY [Restart ceph manager services] ******************************************* 2026-03-11 01:02:18.716008 | orchestrator | 2026-03-11 01:02:18.716013 | orchestrator | TASK [Restart ceph manager service] ******************************************** 2026-03-11 01:02:18.716017 | orchestrator | Wednesday 11 March 2026 01:01:17 +0000 (0:00:01.106) 0:01:13.290 ******* 2026-03-11 01:02:18.716022 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:02:18.716026 | orchestrator | 2026-03-11 01:02:18.716031 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:02:18.716035 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2026-03-11 01:02:18.716041 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:02:18.716045 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:02:18.716050 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:02:18.716054 | orchestrator | 2026-03-11 01:02:18.716059 | orchestrator | 2026-03-11 01:02:18.716063 | orchestrator | 2026-03-11 01:02:18.716068 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-11 01:02:18.716072 | orchestrator | Wednesday 11 March 2026 01:01:28 +0000 (0:00:11.221) 0:01:24.512 ******* 2026-03-11 01:02:18.716077 | orchestrator | =============================================================================== 2026-03-11 01:02:18.716081 | orchestrator | Create admin user ------------------------------------------------------ 60.62s 2026-03-11 01:02:18.716092 | orchestrator | Restart ceph manager service ------------------------------------------- 13.75s 2026-03-11 01:02:18.716096 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 2.04s 2026-03-11 01:02:18.716101 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.82s 2026-03-11 01:02:18.716110 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.06s 2026-03-11 01:02:18.716115 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.06s 2026-03-11 01:02:18.716119 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.04s 2026-03-11 01:02:18.716123 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.96s 2026-03-11 01:02:18.716128 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.94s 2026-03-11 01:02:18.716132 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.80s 2026-03-11 01:02:18.716137 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.11s 2026-03-11 01:02:18.716141 | orchestrator | 2026-03-11 01:02:18.716146 | orchestrator | 2026-03-11 01:02:18.716150 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:02:18.716155 | orchestrator | 2026-03-11 01:02:18.716159 | orchestrator | TASK [Group hosts based 
on Kolla action] *************************************** 2026-03-11 01:02:18.716164 | orchestrator | Wednesday 11 March 2026 01:00:08 +0000 (0:00:00.466) 0:00:00.466 ******* 2026-03-11 01:02:18.716168 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:02:18.716173 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:02:18.716177 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:02:18.716182 | orchestrator | 2026-03-11 01:02:18.716187 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 01:02:18.716191 | orchestrator | Wednesday 11 March 2026 01:00:09 +0000 (0:00:00.559) 0:00:01.026 ******* 2026-03-11 01:02:18.716196 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2026-03-11 01:02:18.716201 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2026-03-11 01:02:18.716205 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2026-03-11 01:02:18.716209 | orchestrator | 2026-03-11 01:02:18.716214 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2026-03-11 01:02:18.716218 | orchestrator | 2026-03-11 01:02:18.716223 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-11 01:02:18.716227 | orchestrator | Wednesday 11 March 2026 01:00:10 +0000 (0:00:00.999) 0:00:02.026 ******* 2026-03-11 01:02:18.716232 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:02:18.716237 | orchestrator | 2026-03-11 01:02:18.716241 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2026-03-11 01:02:18.716246 | orchestrator | Wednesday 11 March 2026 01:00:10 +0000 (0:00:00.657) 0:00:02.683 ******* 2026-03-11 01:02:18.716250 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2026-03-11 01:02:18.716255 | orchestrator | 2026-03-11 
01:02:18.716262 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2026-03-11 01:02:18.716267 | orchestrator | Wednesday 11 March 2026 01:00:14 +0000 (0:00:03.978) 0:00:06.662 ******* 2026-03-11 01:02:18.716271 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2026-03-11 01:02:18.716276 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2026-03-11 01:02:18.716280 | orchestrator | 2026-03-11 01:02:18.716285 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2026-03-11 01:02:18.716289 | orchestrator | Wednesday 11 March 2026 01:00:22 +0000 (0:00:07.506) 0:00:14.168 ******* 2026-03-11 01:02:18.716294 | orchestrator | changed: [testbed-node-0] => (item=service) 2026-03-11 01:02:18.716298 | orchestrator | 2026-03-11 01:02:18.716305 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2026-03-11 01:02:18.716313 | orchestrator | Wednesday 11 March 2026 01:00:26 +0000 (0:00:04.163) 0:00:18.332 ******* 2026-03-11 01:02:18.716320 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2026-03-11 01:02:18.716350 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-11 01:02:18.716367 | orchestrator | 2026-03-11 01:02:18.716377 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2026-03-11 01:02:18.716389 | orchestrator | Wednesday 11 March 2026 01:00:30 +0000 (0:00:04.321) 0:00:22.653 ******* 2026-03-11 01:02:18.716397 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-11 01:02:18.716405 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2026-03-11 01:02:18.716414 | orchestrator | changed: [testbed-node-0] => (item=creator) 2026-03-11 01:02:18.716422 | orchestrator | changed: 
[testbed-node-0] => (item=observer) 2026-03-11 01:02:18.716428 | orchestrator | changed: [testbed-node-0] => (item=audit) 2026-03-11 01:02:18.716433 | orchestrator | 2026-03-11 01:02:18.716439 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2026-03-11 01:02:18.716445 | orchestrator | Wednesday 11 March 2026 01:00:48 +0000 (0:00:17.336) 0:00:39.989 ******* 2026-03-11 01:02:18.716450 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2026-03-11 01:02:18.716455 | orchestrator | 2026-03-11 01:02:18.716461 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2026-03-11 01:02:18.716466 | orchestrator | Wednesday 11 March 2026 01:00:52 +0000 (0:00:03.779) 0:00:43.769 ******* 2026-03-11 01:02:18.716477 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-11 01:02:18.716485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-11 01:02:18.716498 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-11 01:02:18.716509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 
'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716525 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716547 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716552 | orchestrator |
2026-03-11 01:02:18.716557 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2026-03-11 01:02:18.716564 | orchestrator | Wednesday 11 March 2026 01:00:54 +0000 (0:00:02.348) 0:00:46.117 *******
2026-03-11 01:02:18.716569 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2026-03-11 01:02:18.716574 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2026-03-11 01:02:18.716578 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2026-03-11 01:02:18.716583 | orchestrator |
2026-03-11 01:02:18.716588 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2026-03-11 01:02:18.716592 | orchestrator | Wednesday 11 March 2026 01:00:56 +0000 (0:00:01.901) 0:00:48.018 *******
2026-03-11 01:02:18.716597 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:02:18.716603 | orchestrator |
2026-03-11 01:02:18.716614 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2026-03-11 01:02:18.716623 | orchestrator | Wednesday 11 March 2026 01:00:56 +0000 (0:00:00.210) 0:00:48.229 *******
2026-03-11 01:02:18.716631 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:02:18.716639 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:02:18.716647 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:02:18.716656 | orchestrator |
2026-03-11 01:02:18.716661 | orchestrator | TASK [barbican : include_tasks] ************************************************
2026-03-11 01:02:18.716666 | orchestrator | Wednesday 11 March 2026 01:00:57 +0000 (0:00:01.054) 0:00:49.283 *******
2026-03-11 01:02:18.716670 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:02:18.716675 | orchestrator |
2026-03-11 01:02:18.716679 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] *******
2026-03-11 01:02:18.716684 | orchestrator | Wednesday 11 March 2026 01:00:58 +0000 (0:00:00.953) 0:00:50.237 *******
2026-03-11 01:02:18.716692 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:18.716698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:18.716703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:18.716718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716745 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716753 | orchestrator |
2026-03-11 01:02:18.716757 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] ***
2026-03-11 01:02:18.716762 | orchestrator | Wednesday 11 March 2026 01:01:01 +0000 (0:00:03.299) 0:00:53.537 *******
2026-03-11 01:02:18.716770 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:18.716775 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716785 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:02:18.716794 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:18.716800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716810 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716815 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:02:18.716820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:18.716825 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716831 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716836 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:02:18.716841 | orchestrator |
2026-03-11 01:02:18.716845 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2026-03-11 01:02:18.716850 | orchestrator | Wednesday 11 March 2026 01:01:03 +0000 (0:00:01.428) 0:00:54.965 *******
2026-03-11 01:02:18.716855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:18.716863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716870 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716875 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:02:18.716880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:18.716887 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716892 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716900 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:02:18.716905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:18.716913 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716923 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:02:18.716927 | orchestrator |
2026-03-11 01:02:18.716932 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2026-03-11 01:02:18.716937 | orchestrator | Wednesday 11 March 2026 01:01:04 +0000 (0:00:00.921) 0:00:55.886 *******
2026-03-11 01:02:18.716942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:18.716949 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:18.716956 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:18.716965 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716990 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.716999 | orchestrator |
2026-03-11 01:02:18.717004 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2026-03-11 01:02:18.717009 | orchestrator | Wednesday 11 March 2026 01:01:08 +0000 (0:00:03.980) 0:00:59.867 *******
2026-03-11 01:02:18.717013 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:02:18.717018 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:02:18.717023 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:02:18.717027 | orchestrator |
2026-03-11 01:02:18.717032 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2026-03-11 01:02:18.717037 | orchestrator | Wednesday 11 March 2026 01:01:10 +0000 (0:00:02.490) 0:01:02.358 *******
2026-03-11 01:02:18.717041 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-11 01:02:18.717046 | orchestrator |
2026-03-11 01:02:18.717051 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2026-03-11 01:02:18.717056 | orchestrator | Wednesday 11 March 2026 01:01:12 +0000 (0:00:01.869) 0:01:04.227 *******
2026-03-11 01:02:18.717063 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:02:18.717068 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:02:18.717072 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:02:18.717077 | orchestrator |
2026-03-11 01:02:18.717082 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2026-03-11 01:02:18.717086 | orchestrator | Wednesday 11 March 2026 01:01:13 +0000 (0:00:01.068) 0:01:05.296 *******
2026-03-11 01:02:18.717091 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:18.717099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:18.717107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2026-03-11 01:02:18.717112 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.717120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.717128 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.717138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:02:18.717167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:18.717175 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:18.717183 | orchestrator | 2026-03-11 01:02:18.717191 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2026-03-11 01:02:18.717198 | orchestrator | Wednesday 11 March 2026 01:01:24 +0000 (0:00:11.262) 0:01:16.559 ******* 2026-03-11 01:02:18.717206 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-11 01:02:18.717217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-11 01:02:18.717225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:02:18.717233 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:02:18.717241 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-11 01:02:18.717260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-11 01:02:18.717269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:02:18.717278 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:02:18.717285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2026-03-11 01:02:18.717293 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2026-03-11 01:02:18.717299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:02:18.717310 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:02:18.717316 | orchestrator | 2026-03-11 01:02:18.717321 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2026-03-11 01:02:18.717450 | orchestrator | Wednesday 11 March 2026 01:01:26 +0000 (0:00:01.551) 0:01:18.110 ******* 2026-03-11 01:02:18.717468 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-11 01:02:18.717474 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-11 01:02:18.717484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2026-03-11 01:02:18.717489 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:18.717499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:18.717507 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:18.717512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:18.717517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:18.717522 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:02:18.717527 | orchestrator | 2026-03-11 01:02:18.717531 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2026-03-11 01:02:18.717536 | orchestrator | Wednesday 11 March 2026 01:01:31 +0000 (0:00:04.841) 0:01:22.952 ******* 2026-03-11 01:02:18.717541 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:02:18.717546 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:02:18.717551 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:02:18.717555 | orchestrator | 2026-03-11 01:02:18.717563 | orchestrator | TASK [barbican : Creating barbican database] *********************************** 2026-03-11 
01:02:18.717568 | orchestrator | Wednesday 11 March 2026 01:01:31 +0000 (0:00:00.553) 0:01:23.506 ******* 2026-03-11 01:02:18.717572 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:02:18.717577 | orchestrator | 2026-03-11 01:02:18.717582 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ****** 2026-03-11 01:02:18.717590 | orchestrator | Wednesday 11 March 2026 01:01:33 +0000 (0:00:02.070) 0:01:25.577 ******* 2026-03-11 01:02:18.717595 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:02:18.717599 | orchestrator | 2026-03-11 01:02:18.717604 | orchestrator | TASK [barbican : Running barbican bootstrap container] ************************* 2026-03-11 01:02:18.717609 | orchestrator | Wednesday 11 March 2026 01:01:36 +0000 (0:00:02.706) 0:01:28.283 ******* 2026-03-11 01:02:18.717614 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:02:18.717619 | orchestrator | 2026-03-11 01:02:18.717624 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-11 01:02:18.717628 | orchestrator | Wednesday 11 March 2026 01:01:46 +0000 (0:00:10.338) 0:01:38.621 ******* 2026-03-11 01:02:18.717633 | orchestrator | 2026-03-11 01:02:18.717638 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-11 01:02:18.717642 | orchestrator | Wednesday 11 March 2026 01:01:46 +0000 (0:00:00.061) 0:01:38.682 ******* 2026-03-11 01:02:18.717647 | orchestrator | 2026-03-11 01:02:18.717652 | orchestrator | TASK [barbican : Flush handlers] *********************************************** 2026-03-11 01:02:18.717656 | orchestrator | Wednesday 11 March 2026 01:01:47 +0000 (0:00:00.061) 0:01:38.744 ******* 2026-03-11 01:02:18.717661 | orchestrator | 2026-03-11 01:02:18.717665 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ******************** 2026-03-11 01:02:18.717670 | orchestrator | Wednesday 11 March 2026 01:01:47 +0000 
(0:00:00.063) 0:01:38.807 ******* 2026-03-11 01:02:18.717675 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:02:18.717680 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:02:18.717684 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:02:18.717689 | orchestrator | 2026-03-11 01:02:18.717694 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ****** 2026-03-11 01:02:18.717699 | orchestrator | Wednesday 11 March 2026 01:01:53 +0000 (0:00:06.562) 0:01:45.370 ******* 2026-03-11 01:02:18.717703 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:02:18.717708 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:02:18.717713 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:02:18.717717 | orchestrator | 2026-03-11 01:02:18.717722 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] ***************** 2026-03-11 01:02:18.717727 | orchestrator | Wednesday 11 March 2026 01:02:03 +0000 (0:00:10.157) 0:01:55.528 ******* 2026-03-11 01:02:18.717732 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:02:18.717737 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:02:18.717741 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:02:18.717746 | orchestrator | 2026-03-11 01:02:18.717751 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:02:18.717758 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-11 01:02:18.717764 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-11 01:02:18.717769 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-11 01:02:18.717774 | orchestrator | 2026-03-11 01:02:18.717778 | orchestrator | 2026-03-11 01:02:18.717783 | orchestrator | TASKS RECAP 
******************************************************************** 2026-03-11 01:02:18.717788 | orchestrator | Wednesday 11 March 2026 01:02:16 +0000 (0:00:12.784) 0:02:08.313 ******* 2026-03-11 01:02:18.717793 | orchestrator | =============================================================================== 2026-03-11 01:02:18.717797 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.34s 2026-03-11 01:02:18.717802 | orchestrator | barbican : Restart barbican-worker container --------------------------- 12.78s 2026-03-11 01:02:18.717806 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 11.26s 2026-03-11 01:02:18.717811 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 10.34s 2026-03-11 01:02:18.717819 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.16s 2026-03-11 01:02:18.717824 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 7.51s 2026-03-11 01:02:18.717828 | orchestrator | barbican : Restart barbican-api container ------------------------------- 6.56s 2026-03-11 01:02:18.717833 | orchestrator | barbican : Check barbican containers ------------------------------------ 4.84s 2026-03-11 01:02:18.717838 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.32s 2026-03-11 01:02:18.717843 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 4.16s 2026-03-11 01:02:18.717847 | orchestrator | barbican : Copying over config.json files for services ------------------ 3.98s 2026-03-11 01:02:18.717852 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.98s 2026-03-11 01:02:18.717856 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 3.78s 2026-03-11 01:02:18.717861 | orchestrator | service-cert-copy : barbican | 
Copying over extra CA certificates ------- 3.30s 2026-03-11 01:02:18.717865 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.71s 2026-03-11 01:02:18.717870 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.49s 2026-03-11 01:02:18.717875 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.35s 2026-03-11 01:02:18.717882 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.07s 2026-03-11 01:02:18.717887 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.90s 2026-03-11 01:02:18.717892 | orchestrator | barbican : Checking whether barbican-api-paste.ini file exists ---------- 1.87s 2026-03-11 01:02:18.717896 | orchestrator | 2026-03-11 01:02:18 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:02:18.717901 | orchestrator | 2026-03-11 01:02:18 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:02:18.717990 | orchestrator | 2026-03-11 01:02:18 | INFO  | Task 69971821-62b6-4a37-97e6-220672cfcc64 is in state STARTED 2026-03-11 01:02:18.718192 | orchestrator | 2026-03-11 01:02:18 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:21.747906 | orchestrator | 2026-03-11 01:02:21 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:02:21.748437 | orchestrator | 2026-03-11 01:02:21 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:02:21.749232 | orchestrator | 2026-03-11 01:02:21 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:02:21.750110 | orchestrator | 2026-03-11 01:02:21 | INFO  | Task 69971821-62b6-4a37-97e6-220672cfcc64 is in state STARTED 2026-03-11 01:02:21.750157 | orchestrator | 2026-03-11 01:02:21 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:24.778112 | 
orchestrator | 2026-03-11 01:02:24 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:02:24.778212 | orchestrator | 2026-03-11 01:02:24 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:02:24.778761 | orchestrator | 2026-03-11 01:02:24 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:02:24.779483 | orchestrator | 2026-03-11 01:02:24 | INFO  | Task 69971821-62b6-4a37-97e6-220672cfcc64 is in state STARTED 2026-03-11 01:02:24.779507 | orchestrator | 2026-03-11 01:02:24 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:27.826734 | orchestrator | 2026-03-11 01:02:27 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:02:27.827141 | orchestrator | 2026-03-11 01:02:27 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:02:27.827288 | orchestrator | 2026-03-11 01:02:27 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:02:27.828002 | orchestrator | 2026-03-11 01:02:27 | INFO  | Task 69971821-62b6-4a37-97e6-220672cfcc64 is in state STARTED 2026-03-11 01:02:27.828233 | orchestrator | 2026-03-11 01:02:27 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:30.859568 | orchestrator | 2026-03-11 01:02:30 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED 2026-03-11 01:02:30.859853 | orchestrator | 2026-03-11 01:02:30 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED 2026-03-11 01:02:30.860539 | orchestrator | 2026-03-11 01:02:30 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:02:30.861036 | orchestrator | 2026-03-11 01:02:30 | INFO  | Task 69971821-62b6-4a37-97e6-220672cfcc64 is in state STARTED 2026-03-11 01:02:30.861063 | orchestrator | 2026-03-11 01:02:30 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:02:33.900932 | orchestrator | 2026-03-11 
01:02:33 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED
2026-03-11 01:02:33.901121 | orchestrator | 2026-03-11 01:02:33 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED
2026-03-11 01:02:33.901698 | orchestrator | 2026-03-11 01:02:33 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:02:33.902231 | orchestrator | 2026-03-11 01:02:33 | INFO  | Task 69971821-62b6-4a37-97e6-220672cfcc64 is in state STARTED
2026-03-11 01:02:33.902254 | orchestrator | 2026-03-11 01:02:33 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:04.621208 | orchestrator | 2026-03-11 01:03:04 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED
2026-03-11 01:03:04.623067 | orchestrator | 2026-03-11 01:03:04 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED
2026-03-11 01:03:04.628581 | orchestrator | 2026-03-11 01:03:04 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:03:04.628634 | orchestrator | 2026-03-11 01:03:04 | INFO  | Task 69971821-62b6-4a37-97e6-220672cfcc64 is in state SUCCESS
2026-03-11 01:03:04.628639 | orchestrator | 2026-03-11 01:03:04 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:07.686198 | orchestrator | 2026-03-11 01:03:07 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED
2026-03-11 01:03:07.690161 | orchestrator | 2026-03-11 01:03:07 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED
2026-03-11 01:03:07.693241 | orchestrator | 2026-03-11 01:03:07 | INFO  | Task 70a4ff61-142c-4aed-86ee-298e5ac9efcc is in state STARTED
2026-03-11 01:03:07.694086 | orchestrator | 2026-03-11 01:03:07 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:03:07.694193 | orchestrator | 2026-03-11 01:03:07 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:10.732336 | orchestrator | 2026-03-11 01:03:10 | INFO  | Task
ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED
2026-03-11 01:03:10.733819 | orchestrator | 2026-03-11 01:03:10 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state STARTED
2026-03-11 01:03:10.735144 | orchestrator | 2026-03-11 01:03:10 | INFO  | Task 70a4ff61-142c-4aed-86ee-298e5ac9efcc is in state STARTED
2026-03-11 01:03:10.737051 | orchestrator | 2026-03-11 01:03:10 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:03:10.737108 | orchestrator | 2026-03-11 01:03:10 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:22.940677 | orchestrator | 2026-03-11 01:03:22 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state STARTED
2026-03-11 01:03:22.942991 | orchestrator | 2026-03-11 01:03:22 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED
2026-03-11 01:03:22.947516 | orchestrator | 2026-03-11 01:03:22 | INFO  | Task cf4daf62-4206-46ee-9666-9d553f2c57c3 is in state SUCCESS
2026-03-11 01:03:22.949416 | orchestrator |
2026-03-11 01:03:22.949468 | orchestrator |
2026-03-11 01:03:22.949475 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2026-03-11 01:03:22.949481 | orchestrator |
2026-03-11 01:03:22.949487 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2026-03-11 01:03:22.949493 | orchestrator | Wednesday 11 March 2026 01:02:23 +0000 (0:00:00.086) 0:00:00.086 *******
2026-03-11 01:03:22.949499 | orchestrator | changed: [localhost]
2026-03-11 01:03:22.949504 | orchestrator |
2026-03-11 01:03:22.949509 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2026-03-11 01:03:22.949514 | orchestrator | Wednesday 11 March 2026 01:02:24 +0000 (0:00:01.806) 0:00:01.893 *******
2026-03-11 01:03:22.949519 | orchestrator | changed: [localhost]
2026-03-11 01:03:22.949524 | orchestrator |
2026-03-11 01:03:22.949529 | orchestrator | TASK
[Download ironic-agent kernel] ********************************************
2026-03-11 01:03:22.949534 | orchestrator | Wednesday 11 March 2026 01:02:57 +0000 (0:00:32.475) 0:00:34.368 *******
2026-03-11 01:03:22.949539 | orchestrator | changed: [localhost]
2026-03-11 01:03:22.949544 | orchestrator |
2026-03-11 01:03:22.949591 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-11 01:03:22.949596 | orchestrator |
2026-03-11 01:03:22.949602 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-11 01:03:22.949607 | orchestrator | Wednesday 11 March 2026 01:03:02 +0000 (0:00:05.397) 0:00:39.765 *******
2026-03-11 01:03:22.949613 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:03:22.949618 | orchestrator | ok: [testbed-node-1]
2026-03-11 01:03:22.949623 | orchestrator | ok: [testbed-node-2]
2026-03-11 01:03:22.949629 | orchestrator |
2026-03-11 01:03:22.949634 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 01:03:22.949639 | orchestrator | Wednesday 11 March 2026 01:03:03 +0000 (0:00:00.431) 0:00:40.196 *******
2026-03-11 01:03:22.949645 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2026-03-11 01:03:22.949651 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2026-03-11 01:03:22.949656 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2026-03-11 01:03:22.949662 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2026-03-11 01:03:22.949667 | orchestrator |
2026-03-11 01:03:22.949724 | orchestrator | PLAY [Apply role ironic] *******************************************************
2026-03-11 01:03:22.949731 | orchestrator | skipping: no hosts matched
2026-03-11 01:03:22.949737 | orchestrator |
2026-03-11 01:03:22.949790 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 01:03:22.950011 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 01:03:22.950061 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 01:03:22.950068 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 01:03:22.950076 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 01:03:22.950096 | orchestrator |
2026-03-11 01:03:22.950101 | orchestrator |
2026-03-11 01:03:22.950106 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 01:03:22.950111 | orchestrator | Wednesday 11 March 2026 01:03:03 +0000 (0:00:00.685) 0:00:40.882 *******
2026-03-11 01:03:22.950117 | orchestrator | ===============================================================================
2026-03-11 01:03:22.950122 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 32.48s
2026-03-11 01:03:22.950127 | orchestrator | Download ironic-agent kernel -------------------------------------------- 5.40s
2026-03-11 01:03:22.950132 | orchestrator | Ensure the destination directory exists --------------------------------- 1.81s
2026-03-11 01:03:22.950137 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.69s
2026-03-11 01:03:22.950142 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s
2026-03-11 01:03:22.950147 | orchestrator |
2026-03-11 01:03:22.950152 | orchestrator |
2026-03-11 01:03:22.950157 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-11 01:03:22.950162 | orchestrator |
2026-03-11 01:03:22.950167 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-11 01:03:22.950176 | orchestrator | Wednesday 11 March 2026 01:00:12 +0000 (0:00:00.190) 0:00:00.190 *******
2026-03-11 01:03:22.950225 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:03:22.950235 | orchestrator | ok: [testbed-node-1]
2026-03-11 01:03:22.950239 | orchestrator | ok: [testbed-node-2]
2026-03-11 01:03:22.950500 | orchestrator |
2026-03-11 01:03:22.950510 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 01:03:22.950516 | orchestrator | Wednesday 11 March 2026 01:00:12 +0000 (0:00:00.233) 0:00:00.424 *******
2026-03-11 01:03:22.950667 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True)
2026-03-11 01:03:22.950672 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True)
2026-03-11 01:03:22.950676 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True)
2026-03-11 01:03:22.950679 | orchestrator |
2026-03-11 01:03:22.950682 | orchestrator | PLAY [Apply role designate] ****************************************************
2026-03-11 01:03:22.950685 | orchestrator |
2026-03-11 01:03:22.950688 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-11 01:03:22.950692 | orchestrator | Wednesday 11 March 2026 01:00:12 +0000 (0:00:00.424) 0:00:00.848 *******
2026-03-11 01:03:22.950696 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:03:22.950702 | orchestrator |
2026-03-11 01:03:22.950707 | orchestrator | TASK [service-ks-register : designate | Creating services] *********************
2026-03-11 01:03:22.950712 | orchestrator | Wednesday 11 March 2026 01:00:14 +0000 (0:00:01.087) 0:00:01.936 *******
2026-03-11 01:03:22.950743 | orchestrator | changed: [testbed-node-0] => (item=designate (dns))
2026-03-11 01:03:22.950751 | orchestrator |
2026-03-11 01:03:22.950754 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ********************
2026-03-11 01:03:22.950758 | orchestrator | Wednesday 11 March 2026 01:00:18 +0000 (0:00:04.053) 0:00:05.990 *******
2026-03-11 01:03:22.950761 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal)
2026-03-11 01:03:22.950764 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public)
2026-03-11 01:03:22.950767 | orchestrator |
2026-03-11 01:03:22.950772 | orchestrator | TASK [service-ks-register : designate | Creating projects] *********************
2026-03-11 01:03:22.950777 | orchestrator | Wednesday 11 March 2026 01:00:25 +0000 (0:00:07.189) 0:00:13.179 *******
2026-03-11 01:03:22.950782 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-11 01:03:22.950787 | orchestrator |
2026-03-11 01:03:22.950793 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************
2026-03-11 01:03:22.950808 | orchestrator | Wednesday 11 March 2026 01:00:29 +0000 (0:00:03.930) 0:00:17.110 *******
2026-03-11 01:03:22.950813 | orchestrator | changed: [testbed-node-0] => (item=designate -> service)
2026-03-11 01:03:22.950818 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-11 01:03:22.950824 | orchestrator |
2026-03-11 01:03:22.950829 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************
2026-03-11 01:03:22.950835 | orchestrator | Wednesday 11 March 2026 01:00:33 +0000 (0:00:04.360) 0:00:21.470 *******
2026-03-11 01:03:22.950840 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-11 01:03:22.950846 | orchestrator |
2026-03-11 01:03:22.950888 | orchestrator | TASK [service-ks-register : designate | Granting user roles] *******************
2026-03-11 01:03:22.950893 | orchestrator | Wednesday 11 March 2026 01:00:37 +0000 (0:00:04.314) 0:00:25.785
******* 2026-03-11 01:03:22.950898 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2026-03-11 01:03:22.950903 | orchestrator | 2026-03-11 01:03:22.950908 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2026-03-11 01:03:22.950914 | orchestrator | Wednesday 11 March 2026 01:00:42 +0000 (0:00:04.688) 0:00:30.474 ******* 2026-03-11 01:03:22.950928 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 01:03:22.950937 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 01:03:22.950962 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 01:03:22.950969 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:22.950980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:22.950989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:22.950995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951001 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951027 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951037 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951051 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951057 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951087 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951097 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951102 | orchestrator | 2026-03-11 01:03:22.951108 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2026-03-11 01:03:22.951113 | orchestrator | Wednesday 11 March 2026 01:00:45 +0000 (0:00:02.652) 0:00:33.126 ******* 2026-03-11 01:03:22.951118 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:22.951124 | orchestrator | 2026-03-11 01:03:22.951129 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2026-03-11 01:03:22.951134 | orchestrator | Wednesday 11 March 2026 01:00:45 +0000 (0:00:00.115) 0:00:33.242 ******* 2026-03-11 01:03:22.951139 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:22.951144 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:22.951149 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:22.951154 | orchestrator | 2026-03-11 01:03:22.951168 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-11 01:03:22.951179 | orchestrator | Wednesday 11 March 2026 01:00:45 +0000 (0:00:00.278) 0:00:33.521 ******* 2026-03-11 01:03:22.951184 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:03:22.951189 | 
orchestrator |
2026-03-11 01:03:22.951194 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ******
2026-03-11 01:03:22.951202 | orchestrator | Wednesday 11 March 2026 01:00:46 +0000 (0:00:00.687)       0:00:34.208 *******
2026-03-11 01:03:22.951208 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:22.951213 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:22.951252 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:22.951259 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:22.951264 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:22.951273 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:22.951279 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951293 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951314 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951340 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951355 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951374 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951386 | orchestrator |
2026-03-11 01:03:22.951391 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] ***
2026-03-11 01:03:22.951397 | orchestrator | Wednesday 11 March 2026 01:00:52 +0000 (0:00:05.847)       0:00:40.056 *******
2026-03-11 01:03:22.951405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:22.951411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:22.951423 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951461 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:03:22.951469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:22.951475 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:22.951484 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951509 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951521 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:03:22.951529 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:22.951536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:22.951545 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951551 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951572 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951578 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951584 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:03:22.951591 | orchestrator |
2026-03-11 01:03:22.951596 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2026-03-11 01:03:22.951602 | orchestrator | Wednesday 11 March 2026 01:00:54 +0000 (0:00:01.880)       0:00:41.936 *******
2026-03-11 01:03:22.951610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:22.951616 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:22.951625 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951657 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951663 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951669 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:03:22.951674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:22.951683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:22.951693 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951735 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:03:22.951741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:22.951751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:22.951760 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.951798 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:03:22.951803 | orchestrator |
2026-03-11 01:03:22.951808 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2026-03-11 01:03:22.951813 | orchestrator | Wednesday 11 March 2026 01:00:56 +0000 (0:00:02.887)       0:00:44.824 *******
2026-03-11 01:03:22.951818 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:22.951829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:22.951834 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:22.951839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11
01:03:22.951859 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 
01:03:22.951883 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951900 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 
'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951927 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.951936 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 
'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952020 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952037 | orchestrator | 2026-03-11 01:03:22.952057 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2026-03-11 01:03:22.952063 | orchestrator | Wednesday 11 March 2026 01:01:03 +0000 (0:00:06.768) 0:00:51.593 ******* 2026-03-11 01:03:22.952068 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 01:03:22.952082 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 01:03:22.952088 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2026-03-11 01:03:22.952093 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952143 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952150 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952162 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 
'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952204 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952212 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952222 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952228 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952233 | orchestrator | 2026-03-11 01:03:22.952238 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2026-03-11 01:03:22.952316 | orchestrator | Wednesday 11 March 2026 01:01:25 +0000 (0:00:22.102) 0:01:13.696 ******* 2026-03-11 01:03:22.952322 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-11 01:03:22.952328 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-11 01:03:22.952334 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2026-03-11 01:03:22.952339 | orchestrator | 2026-03-11 01:03:22.952348 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2026-03-11 01:03:22.952353 | orchestrator | Wednesday 11 March 2026 01:01:32 +0000 (0:00:06.257) 0:01:19.954 ******* 2026-03-11 01:03:22.952362 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-11 01:03:22.952368 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-11 01:03:22.952373 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2026-03-11 01:03:22.952378 | orchestrator | 2026-03-11 01:03:22.952383 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2026-03-11 01:03:22.952388 | orchestrator | Wednesday 11 March 2026 01:01:35 +0000 (0:00:03.444) 0:01:23.398 ******* 2026-03-11 01:03:22.952394 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9001', 'listen_port': '9001'}}}})  2026-03-11 01:03:22.952405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-11 01:03:22.952411 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-11 01:03:22.952417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952426 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952436 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952466 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 
'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952490 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952498 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952504 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952509 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952524 | orchestrator | 2026-03-11 01:03:22.952532 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2026-03-11 01:03:22.952537 | orchestrator | Wednesday 11 March 2026 01:01:39 +0000 (0:00:03.648) 0:01:27.047 ******* 2026-03-11 01:03:22.952543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-11 01:03:22.952558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-11 01:03:22.952564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-11 01:03:22.952570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 
'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952607 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952612 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952648 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952653 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952659 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952668 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:03:22.952673 | orchestrator | 2026-03-11 01:03:22.952679 | orchestrator | TASK [designate : include_tasks] *********************************************** 2026-03-11 01:03:22.952684 | orchestrator | Wednesday 11 March 2026 01:01:41 +0000 (0:00:02.772) 0:01:29.819 ******* 2026-03-11 01:03:22.952689 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:22.952694 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:22.952699 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:22.952704 | orchestrator | 2026-03-11 01:03:22.952710 | orchestrator | TASK [designate : Copying over existing policy file] *************************** 2026-03-11 01:03:22.952715 | orchestrator | Wednesday 11 March 2026 01:01:42 +0000 (0:00:01.055) 0:01:30.874 ******* 2026-03-11 01:03:22.952724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-11 01:03:22.952730 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 
'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-11 01:03:22.952739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952744 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952759 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952764 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:22.952773 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}})  2026-03-11 01:03:22.952779 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-11 01:03:22.952787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952793 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952803 | orchestrator 
| skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952815 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:22.952823 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 
'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2026-03-11 01:03:22.952829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2026-03-11 01:03:22.952835 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2026-03-11 01:03:22.952840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.952848 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.952854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.952860 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:03:22.952865 | orchestrator |
2026-03-11 01:03:22.952870 | orchestrator | TASK [designate : Check designate containers] **********************************
2026-03-11 01:03:22.952876 | orchestrator | Wednesday 11 March 2026 01:01:44 +0000 (0:00:01.772) 0:01:32.647 *******
2026-03-11 01:03:22.952884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:22.952956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:22.952976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2026-03-11 01:03:22.952986 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:22.952991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:22.953002 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2026-03-11 01:03:22.953008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.953015 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.953022 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.953032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.953038 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.953043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.953052 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.953058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.953064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.953072 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.953082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.953087 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:03:22.953093 | orchestrator |
2026-03-11 01:03:22.953099 | orchestrator | TASK [designate : include_tasks] ***********************************************
2026-03-11 01:03:22.953104 | orchestrator | Wednesday 11 March 2026 01:01:50 +0000 (0:00:05.374) 0:01:38.022 *******
2026-03-11 01:03:22.953109 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:03:22.953114 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:03:22.953120 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:03:22.953125 | orchestrator |
2026-03-11 01:03:22.953131 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2026-03-11 01:03:22.953137 | orchestrator | Wednesday 11 March 2026 01:01:50 +0000 (0:00:00.700) 0:01:38.723 *******
2026-03-11 01:03:22.953146 | orchestrator | changed: [testbed-node-0] => (item=designate)
2026-03-11 01:03:22.953152 | orchestrator |
2026-03-11 01:03:22.953158 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2026-03-11 01:03:22.953163 | orchestrator | Wednesday 11 March 2026 01:01:52 +0000 (0:00:02.112) 0:01:40.835 *******
2026-03-11 01:03:22.953169 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-11 01:03:22.953174 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2026-03-11 01:03:22.953180 | orchestrator |
2026-03-11 01:03:22.953185 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2026-03-11 01:03:22.953190 | orchestrator | Wednesday 11 March 2026 01:01:55 +0000 (0:00:02.507) 0:01:43.343 *******
2026-03-11 01:03:22.953195 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:03:22.953201 | orchestrator |
2026-03-11 01:03:22.953206 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-11 01:03:22.953214 | orchestrator | Wednesday 11 March 2026 01:02:11 +0000 (0:00:16.064) 0:01:59.407 *******
2026-03-11 01:03:22.953220 | orchestrator |
2026-03-11 01:03:22.953225 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-11 01:03:22.953230 | orchestrator | Wednesday 11 March 2026 01:02:11 +0000 (0:00:00.259) 0:01:59.666 *******
2026-03-11 01:03:22.953236 | orchestrator |
2026-03-11 01:03:22.953256 | orchestrator | TASK [designate : Flush handlers] **********************************************
2026-03-11 01:03:22.953261 | orchestrator | Wednesday 11 March 2026 01:02:11 +0000 (0:00:00.120) 0:01:59.787 *******
2026-03-11 01:03:22.953266 | orchestrator |
2026-03-11 01:03:22.953271 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2026-03-11 01:03:22.953280 | orchestrator | Wednesday 11 March 2026 01:02:11 +0000 (0:00:00.120) 0:01:59.908 *******
2026-03-11 01:03:22.953286 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:03:22.953291 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:03:22.953296 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:03:22.953302 | orchestrator |
2026-03-11 01:03:22.953308 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2026-03-11 01:03:22.953314 | orchestrator | Wednesday 11 March 2026 01:02:23 +0000 (0:00:11.109) 0:02:11.018 *******
2026-03-11 01:03:22.953319 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:03:22.953324 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:03:22.953331 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:03:22.953336 | orchestrator |
2026-03-11 01:03:22.953342 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2026-03-11 01:03:22.953347 | orchestrator | Wednesday 11 March 2026 01:02:33 +0000 (0:00:10.507) 0:02:21.526 *******
2026-03-11 01:03:22.953352 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:03:22.953358 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:03:22.953363 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:03:22.953368 | orchestrator |
2026-03-11 01:03:22.953373 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2026-03-11 01:03:22.953379 | orchestrator | Wednesday 11 March 2026 01:02:43 +0000 (0:00:10.383) 0:02:31.910 *******
2026-03-11 01:03:22.953384 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:03:22.953389 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:03:22.953395 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:03:22.953400 | orchestrator |
2026-03-11 01:03:22.953405 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2026-03-11 01:03:22.953411 | orchestrator | Wednesday 11 March 2026 01:02:52 +0000 (0:00:08.479) 0:02:40.389 *******
2026-03-11 01:03:22.953416 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:03:22.953421 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:03:22.953426 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:03:22.953431 | orchestrator |
2026-03-11 01:03:22.953441 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2026-03-11 01:03:22.953446 | orchestrator | Wednesday 11 March 2026 01:03:01 +0000 (0:00:09.409) 0:02:49.798 *******
2026-03-11 01:03:22.953452 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:03:22.953457 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:03:22.953462 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:03:22.953467 | orchestrator |
2026-03-11 01:03:22.953473 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2026-03-11 01:03:22.953479 | orchestrator | Wednesday 11 March 2026 01:03:12 +0000 (0:00:11.082) 0:03:00.881 *******
2026-03-11 01:03:22.953485 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:03:22.953491 | orchestrator |
2026-03-11 01:03:22.953496 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 01:03:22.953503 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-11 01:03:22.953510 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-11 01:03:22.953515 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-11 01:03:22.953521 | orchestrator |
2026-03-11 01:03:22.953526 | orchestrator |
2026-03-11 01:03:22.953532 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 01:03:22.953537 | orchestrator | Wednesday 11 March 2026 01:03:20 +0000 (0:00:07.251) 0:03:08.133 *******
2026-03-11 01:03:22.953543 | orchestrator | ===============================================================================
2026-03-11 01:03:22.953548 | orchestrator | designate : Copying over designate.conf -------------------------------- 22.10s
2026-03-11 01:03:22.953553 | orchestrator | designate : Running Designate bootstrap container ---------------------- 16.07s
2026-03-11 01:03:22.953563 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 11.11s
2026-03-11 01:03:22.953568 | orchestrator | designate : Restart designate-worker container ------------------------- 11.08s
2026-03-11 01:03:22.953574 | orchestrator | designate : Restart designate-api container ---------------------------- 10.51s
2026-03-11 01:03:22.953579 | orchestrator | designate : Restart designate-central container ------------------------ 10.38s
2026-03-11 01:03:22.953584 | orchestrator | designate : Restart designate-mdns container ---------------------------- 9.41s
2026-03-11 01:03:22.953589 | orchestrator | designate : Restart designate-producer container ------------------------ 8.48s
2026-03-11 01:03:22.953595 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.25s
2026-03-11 01:03:22.953600 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 7.19s
2026-03-11 01:03:22.953606 | orchestrator | designate : Copying over config.json files for services ----------------- 6.77s
2026-03-11 01:03:22.953611 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 6.26s
2026-03-11 01:03:22.953616 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.85s
2026-03-11 01:03:22.953625 | orchestrator | designate : Check designate containers ---------------------------------- 5.37s
2026-03-11 01:03:22.953630 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.69s
2026-03-11 01:03:22.953636 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.36s
2026-03-11 01:03:22.953641 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 4.31s
2026-03-11 01:03:22.953646 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.05s
2026-03-11 01:03:22.953651 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.93s
2026-03-11 01:03:22.953657 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 3.65s
2026-03-11 01:03:22.953663 | orchestrator | 2026-03-11 01:03:22 | INFO  | Task 70a4ff61-142c-4aed-86ee-298e5ac9efcc is in state STARTED
2026-03-11 01:03:22.953668 | orchestrator | 2026-03-11 01:03:22 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:03:22.953674 | orchestrator | 2026-03-11 01:03:22 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:25.986226 | orchestrator | 2026-03-11 01:03:25 | INFO  | Task ffe786bb-559e-47dd-a171-2961e0e53f68 is in state SUCCESS
2026-03-11 01:03:25.987189 | orchestrator |
2026-03-11 01:03:25.987228 | orchestrator |
2026-03-11 01:03:25.987234 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-11 01:03:25.987250 | orchestrator |
2026-03-11 01:03:25.987253 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-11 01:03:25.987256 | orchestrator | Wednesday 11 March 2026 01:00:04 +0000 (0:00:00.399) 0:00:00.399 *******
2026-03-11 01:03:25.987260 | orchestrator | ok: [testbed-manager]
2026-03-11 01:03:25.987266 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:03:25.987284 | orchestrator | ok: [testbed-node-1]
2026-03-11 01:03:25.987290 | orchestrator | ok: [testbed-node-2]
2026-03-11 01:03:25.987295 | orchestrator | ok: [testbed-node-3]
2026-03-11 01:03:25.987300 | orchestrator | ok: [testbed-node-4]
2026-03-11 01:03:25.987305 | orchestrator | ok: [testbed-node-5]
2026-03-11 01:03:25.987310 | orchestrator |
2026-03-11 01:03:25.987316 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 01:03:25.987320 | orchestrator | Wednesday 11 March 2026 01:00:05 +0000 (0:00:00.985) 0:00:01.384 *******
2026-03-11 01:03:25.987324 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True)
2026-03-11 01:03:25.987328 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True)
2026-03-11 01:03:25.987353 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True)
2026-03-11 01:03:25.987357 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True)
2026-03-11 01:03:25.987371 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True)
2026-03-11 01:03:25.987374 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True)
2026-03-11 01:03:25.987377 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True)
2026-03-11 01:03:25.987380 | orchestrator |
2026-03-11 01:03:25.987383 | orchestrator | PLAY [Apply role prometheus] ***************************************************
2026-03-11 01:03:25.987386 | orchestrator |
2026-03-11 01:03:25.987389 | orchestrator | TASK [prometheus : include_tasks] **********************************************
2026-03-11 01:03:25.987393 | orchestrator | Wednesday 11 March 2026 01:00:06 +0000 (0:00:00.811) 0:00:02.196 *******
2026-03-11 01:03:25.987396 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 01:03:25.987400 | orchestrator |
2026-03-11 01:03:25.987403 | orchestrator | TASK [prometheus : Ensuring config directories exist] **************************
2026-03-11 01:03:25.987406 | orchestrator | Wednesday 11 March 2026 01:00:08 +0000 (0:00:01.528) 0:00:03.724 *******
2026-03-11 01:03:25.987411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.987415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.987419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.987463 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-11 01:03:25.987477 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.987583 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.987597 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.987601 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.987605 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.987657 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.987662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.987667 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.987677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.987690 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.987698 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.987703 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.987708 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.987714 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.987719 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.987725 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.987732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.987737 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.987743 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-11 01:03:25.987750 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.987755 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.987760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11
01:03:25.987765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.987778 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.987787 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.987792 | orchestrator | 2026-03-11 01:03:25.987798 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2026-03-11 01:03:25.987802 | orchestrator | Wednesday 11 March 2026 01:00:11 +0000 (0:00:03.253) 0:00:06.977 ******* 2026-03-11 01:03:25.987805 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for 
testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 01:03:25.987808 | orchestrator | 2026-03-11 01:03:25.987812 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2026-03-11 01:03:25.987815 | orchestrator | Wednesday 11 March 2026 01:00:12 +0000 (0:00:01.184) 0:00:08.162 ******* 2026-03-11 01:03:25.987818 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-11 01:03:25.987822 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:03:25.987825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:03:25.987828 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:03:25.987836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:03:25.987840 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:03:25.987844 | orchestrator | changed: [testbed-node-5] 
=> (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:03:25.987848 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:03:25.987851 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.987854 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.987858 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.987863 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.987869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.987872 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.987880 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.987883 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.987886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-11 01:03:25.987890 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.987893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.987899 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.987905 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.987910 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-11 01:03:25.987913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.987917 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.987920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.987923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.987928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-11 01:03:25.987934 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.987938 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.987941 | orchestrator | 2026-03-11 01:03:25.987946 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2026-03-11 01:03:25.987951 | orchestrator | Wednesday 11 March 2026 01:00:18 +0000 (0:00:05.543) 0:00:13.705 ******* 2026-03-11 01:03:25.987958 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2026-03-11 01:03:25.987965 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 01:03:25.987972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2026-03-11 01:03:25.987981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:03:25.987986 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 01:03:25.987994 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:03:25.987999 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2026-03-11 01:03:25.988007 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2026-03-11 01:03:25.988012 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:03:25.988017 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2026-03-11 01:03:25.988026 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:25.988031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988036 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988050 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988057 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988062 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988074 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988079 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988089 | orchestrator | skipping: [testbed-manager]
2026-03-11 01:03:25.988108 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:03:25.988117 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:03:25.988125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988133 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988143 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:03:25.988148 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988157 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988162 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988167 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:03:25.988172 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988177 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988191 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:03:25.988196 | orchestrator |
2026-03-11 01:03:25.988200 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2026-03-11 01:03:25.988206 | orchestrator | Wednesday 11 March 2026 01:00:19 +0000 (0:00:01.250) 0:00:14.955 *******
2026-03-11 01:03:25.988213 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988234 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988251 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-11 01:03:25.988259 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988265 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988269 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-11 01:03:25.988278 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988282 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:03:25.988285 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988290 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988300 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988306 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988314 | orchestrator | skipping: [testbed-manager]
2026-03-11 01:03:25.988318 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:03:25.988328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988343 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988347 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:03:25.988353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988357 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988363 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988381 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:03:25.988387 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988393 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988401 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:03:25.988405 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988409 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988590 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:03:25.988593 | orchestrator |
2026-03-11 01:03:25.988596 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2026-03-11 01:03:25.988600 | orchestrator | Wednesday 11 March 2026 01:00:21 +0000 (0:00:01.640) 0:00:16.596 *******
2026-03-11 01:03:25.988609 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2026-03-11 01:03:25.988613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988616 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988623 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988626 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988632 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988636 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2026-03-11 01:03:25.988643 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988650 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988653 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988657 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988660 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988665 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988673 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2026-03-11 01:03:25.988677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988681 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988690 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988695 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2026-03-11 01:03:25.988701 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2026-03-11 01:03:25.988705 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor',
'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.988709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.988712 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.988715 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.988719 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.988722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.988727 | orchestrator | 2026-03-11 01:03:25.988730 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2026-03-11 01:03:25.988733 | orchestrator | Wednesday 11 March 2026 01:00:27 +0000 (0:00:06.100) 0:00:22.696 ******* 2026-03-11 01:03:25.988736 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-11 01:03:25.988740 | orchestrator | 2026-03-11 01:03:25.988743 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2026-03-11 01:03:25.988747 | orchestrator | Wednesday 11 March 2026 01:00:28 +0000 (0:00:01.108) 0:00:23.804 ******* 2026-03-11 01:03:25.988751 | orchestrator | skipping: 
[testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103585, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1140194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988756 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103585, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1140194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988760 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1103602, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1200194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988763 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103585, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1140194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988766 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103585, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1140194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988769 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103585, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1140194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988778 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1103602, 'dev': 81, 'nlink': 
1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1200194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988781 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103585, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1140194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988787 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1103581, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1113877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988790 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1103602, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1200194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988794 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103596, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1170194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988797 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1103581, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1113877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988800 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1103602, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1200194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988808 | 
orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1103585, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1140194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:03:25.988811 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1103602, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1200194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988816 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1103581, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1113877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988819 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1103602, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1200194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988823 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103576, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.110488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988826 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1103581, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1113877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988831 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1103581, 'dev': 81, 'nlink': 1, 'atime': 
1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1113877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988835 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103586, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1147146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988840 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103596, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1170194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988845 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103596, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1170194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988848 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1103581, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1113877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988851 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103596, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1170194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988854 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103596, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1170194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988860 | orchestrator | 
skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1103591, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1164565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.988863 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103596, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1170194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.989462 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12944, 'inode': 1103602, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1200194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:03:25.989485 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': 
True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103587, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1150193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.989489 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103576, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.110488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.989492 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103576, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.110488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.989495 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103576, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 
1773187348.0, 'ctime': 1773188252.110488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.989503 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103576, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.110488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.989507 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103582, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1140194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.989514 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103576, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.110488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.989522 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103586, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1147146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.989529 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103600, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1190193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.989536 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103586, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1147146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.989541 | orchestrator | skipping: [testbed-node-0] => 
(item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103586, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1147146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989551 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103586, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1147146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989557 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103573, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1099648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989567 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103586, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1147146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989574 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1103591, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1164565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989577 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103615, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1221879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989580 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1103591, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1164565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989586 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1103591, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1164565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989589 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1103591, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1164565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989592 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 56929, 'inode': 1103581, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1113877, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989598 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1103591, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1164565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989604 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103598, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1185036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989607 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103587, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1150193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989610 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103587, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1150193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989616 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103587, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1150193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989619 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103587, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1150193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989622 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103579, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.111154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989628 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103587, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1150193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989634 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103582, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1140194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989638 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1103575, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.110211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989641 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103582, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1140194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989646 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103600, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1190193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989650 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103589, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1160192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989653 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103582, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1140194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989659 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103582, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1140194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989664 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103582, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1140194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989668 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103588, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1159608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989671 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103600, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1190193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989677 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103573, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1099648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989680 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103600, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1190193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989683 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103600, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1190193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989689 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1103596, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1170194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989701 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103614, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1218452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989706 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:03:25.989713 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103600, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1190193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989724 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103573, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1099648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989730 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103573, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1099648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989735 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103573, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1099648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989740 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103615, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1221879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989749 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103615, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1221879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989756 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103615, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1221879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989761 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103573, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1099648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989770 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103615, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1221879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989880 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103598, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1185036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989888 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103615, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1221879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989894 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103598, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1185036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989903 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103598, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1185036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989912 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103598, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1185036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989916 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1103576, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.110488, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989923 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103579, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.111154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989926 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103598, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1185036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989929 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103579, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.111154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989932 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1103575, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.110211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989937 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103579, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.111154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989943 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1103575, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.110211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989948 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103579, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.111154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989951 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103589, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1160192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989955 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103579, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.111154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989958 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103589, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1160192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989961 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1103575, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.110211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989966 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103588, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1159608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989971 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1103575, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.110211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989977 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1103586, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1147146, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989981 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103614, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1218452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989984 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103589, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1160192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2026-03-11 01:03:25.989987 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:03:25.989990 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1103575, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.110211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid':
False, 'isgid': False})  2026-03-11 01:03:25.989994 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103588, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1159608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.989998 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103589, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1160192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.990003 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103588, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1159608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.990036 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103589, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1160192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.990041 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103614, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1218452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.990044 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:25.990048 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103588, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1159608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.990051 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': 
False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103588, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1159608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.990054 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103614, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1218452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.990057 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:25.990063 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103614, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1218452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.990069 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:03:25.990075 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103614, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1218452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2026-03-11 01:03:25.990078 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:03:25.990081 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 14018, 'inode': 1103591, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1164565, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:03:25.990084 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1103587, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1150193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:03:25.990088 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1103582, 'dev': 81, 'nlink': 1, 'atime': 
1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1140194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:03:25.990091 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103600, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1190193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:03:25.990094 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103573, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1099648, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:03:25.990099 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1103615, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1221879, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:03:25.990115 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1103598, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1185036, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:03:25.990119 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1103579, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.111154, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:03:25.990122 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5065, 'inode': 1103575, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.110211, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:03:25.990125 | orchestrator | 
changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1103589, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1160192, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:03:25.990128 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1103588, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1159608, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:03:25.990132 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1103614, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1218452, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2026-03-11 01:03:25.990137 | orchestrator | 2026-03-11 01:03:25.990140 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2026-03-11 01:03:25.990144 | orchestrator | 
Wednesday 11 March 2026 01:00:51 +0000 (0:00:23.163) 0:00:46.968 ******* 2026-03-11 01:03:25.990147 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-11 01:03:25.990150 | orchestrator | 2026-03-11 01:03:25.990153 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2026-03-11 01:03:25.990158 | orchestrator | Wednesday 11 March 2026 01:00:53 +0000 (0:00:01.745) 0:00:48.713 ******* 2026-03-11 01:03:25.990162 | orchestrator | [WARNING]: Skipped 2026-03-11 01:03:25.990165 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-11 01:03:25.990168 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2026-03-11 01:03:25.990171 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-11 01:03:25.990174 | orchestrator | manager/prometheus.yml.d' is not a directory 2026-03-11 01:03:25.990178 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-11 01:03:25.990181 | orchestrator | [WARNING]: Skipped 2026-03-11 01:03:25.990184 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-11 01:03:25.990187 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2026-03-11 01:03:25.990190 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-11 01:03:25.990193 | orchestrator | node-0/prometheus.yml.d' is not a directory 2026-03-11 01:03:25.990197 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-11 01:03:25.990201 | orchestrator | [WARNING]: Skipped 2026-03-11 01:03:25.990205 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-11 01:03:25.990208 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2026-03-11 01:03:25.990211 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-11 01:03:25.990214 | 
orchestrator | node-1/prometheus.yml.d' is not a directory 2026-03-11 01:03:25.990217 | orchestrator | ok: [testbed-node-1 -> localhost] 2026-03-11 01:03:25.990220 | orchestrator | [WARNING]: Skipped 2026-03-11 01:03:25.990223 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-11 01:03:25.990226 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2026-03-11 01:03:25.990229 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-11 01:03:25.990232 | orchestrator | node-2/prometheus.yml.d' is not a directory 2026-03-11 01:03:25.990235 | orchestrator | ok: [testbed-node-2 -> localhost] 2026-03-11 01:03:25.990251 | orchestrator | [WARNING]: Skipped 2026-03-11 01:03:25.990255 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-11 01:03:25.990260 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2026-03-11 01:03:25.990265 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-11 01:03:25.990270 | orchestrator | node-3/prometheus.yml.d' is not a directory 2026-03-11 01:03:25.990275 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-11 01:03:25.990279 | orchestrator | [WARNING]: Skipped 2026-03-11 01:03:25.990283 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-11 01:03:25.990288 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2026-03-11 01:03:25.990293 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-11 01:03:25.990297 | orchestrator | node-4/prometheus.yml.d' is not a directory 2026-03-11 01:03:25.990302 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-11 01:03:25.990307 | orchestrator | [WARNING]: Skipped 2026-03-11 01:03:25.990311 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-11 01:03:25.990314 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2026-03-11 01:03:25.990319 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2026-03-11 01:03:25.990322 | orchestrator | node-5/prometheus.yml.d' is not a directory 2026-03-11 01:03:25.990325 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-11 01:03:25.990328 | orchestrator | 2026-03-11 01:03:25.990332 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2026-03-11 01:03:25.990335 | orchestrator | Wednesday 11 March 2026 01:00:56 +0000 (0:00:03.583) 0:00:52.297 ******* 2026-03-11 01:03:25.990338 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-11 01:03:25.990341 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:25.990344 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-11 01:03:25.990353 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-11 01:03:25.990356 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:25.990364 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:25.990367 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-11 01:03:25.990370 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:03:25.990373 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-11 01:03:25.990376 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:03:25.990379 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2026-03-11 01:03:25.990382 | orchestrator | skipping: [testbed-node-4] 2026-03-11 
01:03:25.990385 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2026-03-11 01:03:25.990388 | orchestrator | 2026-03-11 01:03:25.990392 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2026-03-11 01:03:25.990395 | orchestrator | Wednesday 11 March 2026 01:01:21 +0000 (0:00:24.392) 0:01:16.689 ******* 2026-03-11 01:03:25.990398 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-11 01:03:25.990401 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:25.990406 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-11 01:03:25.990409 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:25.990413 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-11 01:03:25.990420 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:25.990423 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-11 01:03:25.990426 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:03:25.990429 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-11 01:03:25.990432 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:03:25.990435 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2026-03-11 01:03:25.990438 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:03:25.990441 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2026-03-11 01:03:25.990444 | orchestrator | 2026-03-11 01:03:25.990449 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2026-03-11 
01:03:25.990485 | orchestrator | Wednesday 11 March 2026 01:01:25 +0000 (0:00:04.729) 0:01:21.419 ******* 2026-03-11 01:03:25.990490 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-11 01:03:25.990498 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-11 01:03:25.990501 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-11 01:03:25.990508 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:25.990511 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2026-03-11 01:03:25.990514 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:25.990520 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:25.990524 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-11 01:03:25.990532 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:03:25.990538 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-11 01:03:25.990543 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:03:25.990548 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2026-03-11 01:03:25.990553 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:03:25.990557 | orchestrator | 2026-03-11 01:03:25.990560 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2026-03-11 01:03:25.990563 | orchestrator | Wednesday 11 March 2026 01:01:29 
+0000 (0:00:03.456) 0:01:24.875 ******* 2026-03-11 01:03:25.990596 | orchestrator | ok: [testbed-manager -> localhost] 2026-03-11 01:03:25.990601 | orchestrator | 2026-03-11 01:03:25.990604 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2026-03-11 01:03:25.990607 | orchestrator | Wednesday 11 March 2026 01:01:29 +0000 (0:00:00.647) 0:01:25.523 ******* 2026-03-11 01:03:25.990610 | orchestrator | skipping: [testbed-manager] 2026-03-11 01:03:25.990613 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:25.990617 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:25.990620 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:25.990623 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:03:25.990626 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:03:25.990629 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:03:25.990632 | orchestrator | 2026-03-11 01:03:25.990635 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2026-03-11 01:03:25.990638 | orchestrator | Wednesday 11 March 2026 01:01:30 +0000 (0:00:00.634) 0:01:26.157 ******* 2026-03-11 01:03:25.990641 | orchestrator | skipping: [testbed-manager] 2026-03-11 01:03:25.990645 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:03:25.990648 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:03:25.990651 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:03:25.990654 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:25.990657 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:03:25.990660 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:03:25.990663 | orchestrator | 2026-03-11 01:03:25.990666 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2026-03-11 01:03:25.990669 | orchestrator | Wednesday 11 March 2026 01:01:34 +0000 (0:00:03.953) 0:01:30.111 ******* 2026-03-11 
01:03:25.990672 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-11 01:03:25.990676 | orchestrator | skipping: [testbed-manager] 2026-03-11 01:03:25.990679 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-11 01:03:25.990682 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:25.990685 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-11 01:03:25.990688 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:25.990691 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-11 01:03:25.990694 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:25.990697 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-11 01:03:25.990704 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:03:25.990710 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-11 01:03:25.990713 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:03:25.990716 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2026-03-11 01:03:25.990719 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:03:25.990722 | orchestrator | 2026-03-11 01:03:25.990725 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2026-03-11 01:03:25.990728 | orchestrator | Wednesday 11 March 2026 01:01:37 +0000 (0:00:02.538) 0:01:32.649 ******* 2026-03-11 01:03:25.990731 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-11 01:03:25.990735 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:25.990738 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-11 01:03:25.990741 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:25.990747 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-11 01:03:25.990750 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:25.990753 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2026-03-11 01:03:25.990756 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-11 01:03:25.990759 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:03:25.990762 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-11 01:03:25.990765 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:03:25.990768 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2026-03-11 01:03:25.990772 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:03:25.990775 | orchestrator | 2026-03-11 01:03:25.990778 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2026-03-11 01:03:25.990781 | orchestrator | Wednesday 11 March 2026 01:01:38 +0000 (0:00:01.618) 0:01:34.268 ******* 2026-03-11 01:03:25.990784 | orchestrator | [WARNING]: Skipped 2026-03-11 01:03:25.990787 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2026-03-11 01:03:25.990790 | orchestrator | due to this access issue: 2026-03-11 01:03:25.990794 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2026-03-11 01:03:25.990797 | orchestrator | not a directory 2026-03-11 01:03:25.990800 | orchestrator | ok: [testbed-manager -> 
localhost] 2026-03-11 01:03:25.990803 | orchestrator | 2026-03-11 01:03:25.990806 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2026-03-11 01:03:25.990809 | orchestrator | Wednesday 11 March 2026 01:01:40 +0000 (0:00:01.968) 0:01:36.236 ******* 2026-03-11 01:03:25.990812 | orchestrator | skipping: [testbed-manager] 2026-03-11 01:03:25.990815 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:25.990818 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:25.990822 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:25.990825 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:03:25.990828 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:03:25.990831 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:03:25.990834 | orchestrator | 2026-03-11 01:03:25.990837 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2026-03-11 01:03:25.990840 | orchestrator | Wednesday 11 March 2026 01:01:41 +0000 (0:00:00.696) 0:01:36.933 ******* 2026-03-11 01:03:25.990843 | orchestrator | skipping: [testbed-manager] 2026-03-11 01:03:25.990846 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:03:25.990849 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:03:25.990855 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:03:25.990858 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:03:25.990861 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:03:25.990864 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:03:25.990868 | orchestrator | 2026-03-11 01:03:25.990871 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2026-03-11 01:03:25.990874 | orchestrator | Wednesday 11 March 2026 01:01:42 +0000 (0:00:00.730) 0:01:37.664 ******* 2026-03-11 01:03:25.990878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:03:25.990883 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2026-03-11 01:03:25.990887 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:03:25.990893 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:03:25.990896 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:03:25.990900 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:03:25.990903 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.990910 | orchestrator | changed: [testbed-manager] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:03:25.990913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.990918 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.990922 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.990932 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.990940 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2026-03-11 01:03:25.990946 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.990955 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 
'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.990961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.990966 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.990974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.990979 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.990987 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2026-03-11 01:03:25.990993 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.991002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.991008 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.991013 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.991021 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.991026 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2026-03-11 01:03:25.991033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.991039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2026-03-11 01:03:25.991047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2026-03-11 01:03:25.991052 | orchestrator | 2026-03-11 01:03:25.991057 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2026-03-11 01:03:25.991062 | orchestrator | Wednesday 11 March 2026 01:01:47 +0000 (0:00:04.961) 0:01:42.625 ******* 2026-03-11 01:03:25.991067 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2026-03-11 01:03:25.991072 | orchestrator | skipping: [testbed-manager] 2026-03-11 01:03:25.991077 | orchestrator | 2026-03-11 01:03:25.991082 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-11 01:03:25.991087 | orchestrator | Wednesday 11 March 2026 01:01:49 +0000 (0:00:02.031) 0:01:44.657 ******* 2026-03-11 01:03:25.991092 | orchestrator | 2026-03-11 01:03:25.991098 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-11 01:03:25.991103 | orchestrator | Wednesday 11 March 2026 01:01:49 +0000 (0:00:00.168) 0:01:44.826 ******* 2026-03-11 01:03:25.991108 | orchestrator | 2026-03-11 01:03:25.991113 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-11 01:03:25.991119 | orchestrator | Wednesday 11 March 2026 01:01:49 +0000 (0:00:00.132) 0:01:44.958 ******* 2026-03-11 01:03:25.991122 | orchestrator | 2026-03-11 01:03:25.991125 | orchestrator | TASK [prometheus : Flush 
handlers] ********************************************* 2026-03-11 01:03:25.991128 | orchestrator | Wednesday 11 March 2026 01:01:49 +0000 (0:00:00.144) 0:01:45.106 ******* 2026-03-11 01:03:25.991131 | orchestrator | 2026-03-11 01:03:25.991134 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-11 01:03:25.991137 | orchestrator | Wednesday 11 March 2026 01:01:49 +0000 (0:00:00.238) 0:01:45.345 ******* 2026-03-11 01:03:25.991140 | orchestrator | 2026-03-11 01:03:25.991143 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-11 01:03:25.991146 | orchestrator | Wednesday 11 March 2026 01:01:49 +0000 (0:00:00.052) 0:01:45.397 ******* 2026-03-11 01:03:25.991149 | orchestrator | 2026-03-11 01:03:25.991152 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2026-03-11 01:03:25.991156 | orchestrator | Wednesday 11 March 2026 01:01:49 +0000 (0:00:00.049) 0:01:45.447 ******* 2026-03-11 01:03:25.991159 | orchestrator | 2026-03-11 01:03:25.991162 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2026-03-11 01:03:25.991165 | orchestrator | Wednesday 11 March 2026 01:01:49 +0000 (0:00:00.065) 0:01:45.512 ******* 2026-03-11 01:03:25.991168 | orchestrator | changed: [testbed-manager] 2026-03-11 01:03:25.991171 | orchestrator | 2026-03-11 01:03:25.991174 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2026-03-11 01:03:25.991177 | orchestrator | Wednesday 11 March 2026 01:02:02 +0000 (0:00:12.146) 0:01:57.659 ******* 2026-03-11 01:03:25.991180 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:25.991183 | orchestrator | changed: [testbed-manager] 2026-03-11 01:03:25.991188 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:03:25.991192 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:03:25.991195 | orchestrator | 
changed: [testbed-node-3] 2026-03-11 01:03:25.991198 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:03:25.991201 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:03:25.991204 | orchestrator | 2026-03-11 01:03:25.991207 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2026-03-11 01:03:25.991210 | orchestrator | Wednesday 11 March 2026 01:02:17 +0000 (0:00:15.800) 0:02:13.460 ******* 2026-03-11 01:03:25.991216 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:25.991219 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:03:25.991222 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:03:25.991225 | orchestrator | 2026-03-11 01:03:25.991228 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2026-03-11 01:03:25.991231 | orchestrator | Wednesday 11 March 2026 01:02:28 +0000 (0:00:10.337) 0:02:23.797 ******* 2026-03-11 01:03:25.991234 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:25.991298 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:03:25.991303 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:03:25.991307 | orchestrator | 2026-03-11 01:03:25.991314 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2026-03-11 01:03:25.991318 | orchestrator | Wednesday 11 March 2026 01:02:33 +0000 (0:00:05.532) 0:02:29.330 ******* 2026-03-11 01:03:25.991322 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:03:25.991325 | orchestrator | changed: [testbed-manager] 2026-03-11 01:03:25.991329 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:03:25.991333 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:25.991336 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:03:25.991339 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:03:25.991343 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:03:25.991347 | orchestrator | 2026-03-11 
01:03:25.991350 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2026-03-11 01:03:25.991354 | orchestrator | Wednesday 11 March 2026 01:02:51 +0000 (0:00:17.525) 0:02:46.855 ******* 2026-03-11 01:03:25.991357 | orchestrator | changed: [testbed-manager] 2026-03-11 01:03:25.991361 | orchestrator | 2026-03-11 01:03:25.991365 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2026-03-11 01:03:25.991368 | orchestrator | Wednesday 11 March 2026 01:02:58 +0000 (0:00:07.334) 0:02:54.190 ******* 2026-03-11 01:03:25.991372 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:03:25.991375 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:03:25.991379 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:03:25.991383 | orchestrator | 2026-03-11 01:03:25.991387 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2026-03-11 01:03:25.991390 | orchestrator | Wednesday 11 March 2026 01:03:08 +0000 (0:00:09.942) 0:03:04.132 ******* 2026-03-11 01:03:25.991394 | orchestrator | changed: [testbed-manager] 2026-03-11 01:03:25.991397 | orchestrator | 2026-03-11 01:03:25.991401 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2026-03-11 01:03:25.991405 | orchestrator | Wednesday 11 March 2026 01:03:13 +0000 (0:00:05.008) 0:03:09.141 ******* 2026-03-11 01:03:25.991408 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:03:25.991412 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:03:25.991416 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:03:25.991419 | orchestrator | 2026-03-11 01:03:25.991423 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:03:25.991427 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2026-03-11 01:03:25.991432 | 
orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-11 01:03:25.991435 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-11 01:03:25.991439 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-11 01:03:25.991443 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-11 01:03:25.991446 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-11 01:03:25.991453 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-11 01:03:25.991456 | orchestrator | 2026-03-11 01:03:25.991460 | orchestrator | 2026-03-11 01:03:25.991464 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:03:25.991467 | orchestrator | Wednesday 11 March 2026 01:03:23 +0000 (0:00:09.840) 0:03:18.981 ******* 2026-03-11 01:03:25.991471 | orchestrator | =============================================================================== 2026-03-11 01:03:25.991475 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 24.39s 2026-03-11 01:03:25.991478 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 23.16s 2026-03-11 01:03:25.991482 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 17.52s 2026-03-11 01:03:25.991486 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 15.80s 2026-03-11 01:03:25.991490 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 12.15s 2026-03-11 01:03:25.991493 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 10.34s 2026-03-11 
01:03:25.991499 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 9.94s 2026-03-11 01:03:25.991503 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 9.84s 2026-03-11 01:03:25.991507 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 7.34s 2026-03-11 01:03:25.991510 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.10s 2026-03-11 01:03:25.991514 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 5.54s 2026-03-11 01:03:25.991519 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ------------ 5.53s 2026-03-11 01:03:25.991525 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.01s 2026-03-11 01:03:25.991533 | orchestrator | prometheus : Check prometheus containers -------------------------------- 4.96s 2026-03-11 01:03:25.991538 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.73s 2026-03-11 01:03:25.991543 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 3.95s 2026-03-11 01:03:25.991551 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 3.58s 2026-03-11 01:03:25.991557 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.46s 2026-03-11 01:03:25.991562 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.25s 2026-03-11 01:03:25.991567 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 2.54s 2026-03-11 01:03:25.991573 | orchestrator | 2026-03-11 01:03:25 | INFO  | Task f20f9d77-f53b-400b-a92f-43b9b9ecfdfc is in state STARTED 2026-03-11 01:03:25.991578 | orchestrator | 2026-03-11 01:03:25 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED 
2026-03-11 01:03:25.991584 | orchestrator | 2026-03-11 01:03:25 | INFO  | Task 70a4ff61-142c-4aed-86ee-298e5ac9efcc is in state STARTED
2026-03-11 01:03:25.991589 | orchestrator | 2026-03-11 01:03:25 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:03:25.991593 | orchestrator | 2026-03-11 01:03:25 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:29.044918 | orchestrator | 2026-03-11 01:03:29 | INFO  | Task f20f9d77-f53b-400b-a92f-43b9b9ecfdfc is in state STARTED
2026-03-11 01:03:29.045249 | orchestrator | 2026-03-11 01:03:29 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED
2026-03-11 01:03:29.045927 | orchestrator | 2026-03-11 01:03:29 | INFO  | Task 70a4ff61-142c-4aed-86ee-298e5ac9efcc is in state STARTED
2026-03-11 01:03:29.046598 | orchestrator | 2026-03-11 01:03:29 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:03:29.046649 | orchestrator | 2026-03-11 01:03:29 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:32.140549 | orchestrator | 2026-03-11 01:03:32 | INFO  | Task f20f9d77-f53b-400b-a92f-43b9b9ecfdfc is in state STARTED
2026-03-11 01:03:32.141265 | orchestrator | 2026-03-11 01:03:32 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED
2026-03-11 01:03:32.142036 | orchestrator | 2026-03-11 01:03:32 | INFO  | Task 70a4ff61-142c-4aed-86ee-298e5ac9efcc is in state STARTED
2026-03-11 01:03:32.142936 | orchestrator | 2026-03-11 01:03:32 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:03:32.142978 | orchestrator | 2026-03-11 01:03:32 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:35.168580 | orchestrator | 2026-03-11 01:03:35 | INFO  | Task f20f9d77-f53b-400b-a92f-43b9b9ecfdfc is in state STARTED
2026-03-11 01:03:35.169511 | orchestrator | 2026-03-11 01:03:35 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED
2026-03-11 01:03:35.171089 | orchestrator | 2026-03-11 01:03:35 | INFO  | Task 70a4ff61-142c-4aed-86ee-298e5ac9efcc is in state STARTED
2026-03-11 01:03:35.171136 | orchestrator | 2026-03-11 01:03:35 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:03:35.171143 | orchestrator | 2026-03-11 01:03:35 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:38.204377 | orchestrator | 2026-03-11 01:03:38 | INFO  | Task f20f9d77-f53b-400b-a92f-43b9b9ecfdfc is in state STARTED
2026-03-11 01:03:38.206583 | orchestrator | 2026-03-11 01:03:38 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED
2026-03-11 01:03:38.208399 | orchestrator | 2026-03-11 01:03:38 | INFO  | Task 70a4ff61-142c-4aed-86ee-298e5ac9efcc is in state STARTED
2026-03-11 01:03:38.210188 | orchestrator | 2026-03-11 01:03:38 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:03:38.210294 | orchestrator | 2026-03-11 01:03:38 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:41.243057 | orchestrator | 2026-03-11 01:03:41 | INFO  | Task f20f9d77-f53b-400b-a92f-43b9b9ecfdfc is in state STARTED
2026-03-11 01:03:41.243639 | orchestrator | 2026-03-11 01:03:41 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED
2026-03-11 01:03:41.244675 | orchestrator | 2026-03-11 01:03:41 | INFO  | Task 70a4ff61-142c-4aed-86ee-298e5ac9efcc is in state STARTED
2026-03-11 01:03:41.247807 | orchestrator | 2026-03-11 01:03:41 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:03:41.247858 | orchestrator | 2026-03-11 01:03:41 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:44.282567 | orchestrator | 2026-03-11 01:03:44 | INFO  | Task f20f9d77-f53b-400b-a92f-43b9b9ecfdfc is in state STARTED
2026-03-11 01:03:44.287681 | orchestrator | 2026-03-11 01:03:44 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED
2026-03-11 01:03:44.288963 | orchestrator | 2026-03-11 01:03:44 | INFO  | Task 70a4ff61-142c-4aed-86ee-298e5ac9efcc is in state STARTED
2026-03-11 01:03:44.290336 | orchestrator | 2026-03-11 01:03:44 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:03:44.290415 | orchestrator | 2026-03-11 01:03:44 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:47.341939 | orchestrator | 2026-03-11 01:03:47 | INFO  | Task f20f9d77-f53b-400b-a92f-43b9b9ecfdfc is in state STARTED
2026-03-11 01:03:47.343659 | orchestrator | 2026-03-11 01:03:47 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED
2026-03-11 01:03:47.347480 | orchestrator | 2026-03-11 01:03:47 | INFO  | Task 70a4ff61-142c-4aed-86ee-298e5ac9efcc is in state STARTED
2026-03-11 01:03:47.349556 | orchestrator | 2026-03-11 01:03:47 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:03:47.349621 | orchestrator | 2026-03-11 01:03:47 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:50.401289 | orchestrator | 2026-03-11 01:03:50 | INFO  | Task f20f9d77-f53b-400b-a92f-43b9b9ecfdfc is in state STARTED
2026-03-11 01:03:50.401355 | orchestrator | 2026-03-11 01:03:50 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED
2026-03-11 01:03:50.401364 | orchestrator | 2026-03-11 01:03:50 | INFO  | Task 70a4ff61-142c-4aed-86ee-298e5ac9efcc is in state STARTED
2026-03-11 01:03:50.401369 | orchestrator | 2026-03-11 01:03:50 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:03:50.401373 | orchestrator | 2026-03-11 01:03:50 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:53.419525 | orchestrator | 2026-03-11 01:03:53 | INFO  | Task f20f9d77-f53b-400b-a92f-43b9b9ecfdfc is in state STARTED
2026-03-11 01:03:53.419642 | orchestrator | 2026-03-11 01:03:53 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED
2026-03-11 01:03:53.420508 | orchestrator | 2026-03-11 01:03:53 | INFO  | Task 70a4ff61-142c-4aed-86ee-298e5ac9efcc is in state STARTED
2026-03-11 01:03:53.420853 | orchestrator | 2026-03-11 01:03:53 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:03:53.420893 | orchestrator | 2026-03-11 01:03:53 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:56.461816 | orchestrator | 2026-03-11 01:03:56 | INFO  | Task f20f9d77-f53b-400b-a92f-43b9b9ecfdfc is in state SUCCESS
2026-03-11 01:03:56.465146 | orchestrator | 2026-03-11 01:03:56 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED
2026-03-11 01:03:56.468378 | orchestrator | 2026-03-11 01:03:56 | INFO  | Task 76c8b74a-43be-486b-a8e9-d49afbf39a76 is in state STARTED
2026-03-11 01:03:56.471472 | orchestrator | 2026-03-11 01:03:56 | INFO  | Task 70a4ff61-142c-4aed-86ee-298e5ac9efcc is in state STARTED
2026-03-11 01:03:56.471519 | orchestrator | 2026-03-11 01:03:56 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:03:56.471987 | orchestrator | 2026-03-11 01:03:56 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:03:59.500144 | orchestrator | 2026-03-11 01:03:59 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED
2026-03-11 01:03:59.500874 | orchestrator | 2026-03-11 01:03:59 | INFO  | Task 76c8b74a-43be-486b-a8e9-d49afbf39a76 is in state STARTED
2026-03-11 01:03:59.502436 | orchestrator | 2026-03-11 01:03:59 | INFO  | Task 70a4ff61-142c-4aed-86ee-298e5ac9efcc is in state STARTED
2026-03-11 01:03:59.503044 | orchestrator | 2026-03-11 01:03:59 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:03:59.503149 | orchestrator | 2026-03-11 01:03:59 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:04:02.538505 | orchestrator | 2026-03-11 01:04:02 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED
2026-03-11 01:04:02.539231 | orchestrator | 2026-03-11 01:04:02 | INFO  | Task 76c8b74a-43be-486b-a8e9-d49afbf39a76 is in state STARTED
2026-03-11 01:04:02.541327 | orchestrator | 2026-03-11 01:04:02 | INFO  | Task 70a4ff61-142c-4aed-86ee-298e5ac9efcc is in state STARTED
2026-03-11 01:04:02.545253 | orchestrator | 2026-03-11 01:04:02 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:04:02.546319 | orchestrator | 2026-03-11 01:04:02 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:04:05.577582 | orchestrator | 2026-03-11 01:04:05 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED
2026-03-11 01:04:05.578788 | orchestrator | 2026-03-11 01:04:05 | INFO  | Task 76c8b74a-43be-486b-a8e9-d49afbf39a76 is in state STARTED
2026-03-11 01:04:05.580173 | orchestrator | 2026-03-11 01:04:05 | INFO  | Task 70a4ff61-142c-4aed-86ee-298e5ac9efcc is in state STARTED
2026-03-11 01:04:05.581568 | orchestrator | 2026-03-11 01:04:05 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED
2026-03-11 01:04:05.582131 | orchestrator | 2026-03-11 01:04:05 | INFO  | Wait 1 second(s) until the next check
2026-03-11 01:04:08.618278 | orchestrator | 2026-03-11 01:04:08 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED
2026-03-11 01:04:08.618435 | orchestrator | 2026-03-11 01:04:08 | INFO  | Task 76c8b74a-43be-486b-a8e9-d49afbf39a76 is in state STARTED
2026-03-11 01:04:08.619458 | orchestrator | 2026-03-11 01:04:08 | INFO  | Task 70a4ff61-142c-4aed-86ee-298e5ac9efcc is in state SUCCESS
2026-03-11 01:04:08.619618 | orchestrator |
2026-03-11 01:04:08.619634 | orchestrator |
2026-03-11 01:04:08.619640 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-11 01:04:08.619646 | orchestrator |
2026-03-11 01:04:08.619652 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-11 01:04:08.619658 | orchestrator | Wednesday 11 March 2026 01:03:27 +0000 (0:00:00.248) 0:00:00.248
*******
2026-03-11 01:04:08.619663 | orchestrator | ok: [testbed-manager]
2026-03-11 01:04:08.619669 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:04:08.619675 | orchestrator | ok: [testbed-node-1]
2026-03-11 01:04:08.619680 | orchestrator | ok: [testbed-node-2]
2026-03-11 01:04:08.619684 | orchestrator | ok: [testbed-node-3]
2026-03-11 01:04:08.619729 | orchestrator | ok: [testbed-node-4]
2026-03-11 01:04:08.619737 | orchestrator | ok: [testbed-node-5]
2026-03-11 01:04:08.619742 | orchestrator |
2026-03-11 01:04:08.619748 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 01:04:08.619753 | orchestrator | Wednesday 11 March 2026 01:03:29 +0000 (0:00:01.047) 0:00:01.296 *******
2026-03-11 01:04:08.619757 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2026-03-11 01:04:08.619763 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2026-03-11 01:04:08.619769 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2026-03-11 01:04:08.619774 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2026-03-11 01:04:08.619779 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2026-03-11 01:04:08.619785 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2026-03-11 01:04:08.619790 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2026-03-11 01:04:08.619795 | orchestrator |
2026-03-11 01:04:08.619801 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2026-03-11 01:04:08.619806 | orchestrator |
2026-03-11 01:04:08.619812 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2026-03-11 01:04:08.619817 | orchestrator | Wednesday 11 March 2026 01:03:29 +0000 (0:00:00.897) 0:00:02.193 *******
2026-03-11 01:04:08.619823 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 01:04:08.619830 | orchestrator |
2026-03-11 01:04:08.619835 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2026-03-11 01:04:08.619840 | orchestrator | Wednesday 11 March 2026 01:03:31 +0000 (0:00:01.590) 0:00:03.784 *******
2026-03-11 01:04:08.619845 | orchestrator | changed: [testbed-manager] => (item=swift (object-store))
2026-03-11 01:04:08.619848 | orchestrator |
2026-03-11 01:04:08.619851 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2026-03-11 01:04:08.619854 | orchestrator | Wednesday 11 March 2026 01:03:34 +0000 (0:00:02.621) 0:00:06.406 *******
2026-03-11 01:04:08.619875 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2026-03-11 01:04:08.619883 | orchestrator | changed: [testbed-manager] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2026-03-11 01:04:08.619888 | orchestrator |
2026-03-11 01:04:08.619893 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2026-03-11 01:04:08.619898 | orchestrator | Wednesday 11 March 2026 01:03:39 +0000 (0:00:05.317) 0:00:11.723 *******
2026-03-11 01:04:08.619904 | orchestrator | ok: [testbed-manager] => (item=service)
2026-03-11 01:04:08.619909 | orchestrator |
2026-03-11 01:04:08.619914 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2026-03-11 01:04:08.619919 | orchestrator | Wednesday 11 March 2026 01:03:42 +0000 (0:00:02.629) 0:00:14.353 *******
2026-03-11 01:04:08.619925 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service)
2026-03-11 01:04:08.619930 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-11 01:04:08.619937 | orchestrator |
2026-03-11 01:04:08.619940 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2026-03-11 01:04:08.619946 | orchestrator | Wednesday 11 March 2026 01:03:45 +0000 (0:00:03.205) 0:00:17.559 *******
2026-03-11 01:04:08.619951 | orchestrator | ok: [testbed-manager] => (item=admin)
2026-03-11 01:04:08.619956 | orchestrator | changed: [testbed-manager] => (item=ResellerAdmin)
2026-03-11 01:04:08.619962 | orchestrator |
2026-03-11 01:04:08.619968 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2026-03-11 01:04:08.619972 | orchestrator | Wednesday 11 March 2026 01:03:51 +0000 (0:00:06.067) 0:00:23.626 *******
2026-03-11 01:04:08.619978 | orchestrator | changed: [testbed-manager] => (item=ceph_rgw -> service -> admin)
2026-03-11 01:04:08.619983 | orchestrator |
2026-03-11 01:04:08.619988 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 01:04:08.619993 | orchestrator | testbed-manager : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 01:04:08.620009 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 01:04:08.620015 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 01:04:08.620020 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 01:04:08.620098 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 01:04:08.620114 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 01:04:08.620120 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 01:04:08.620126 | orchestrator |
2026-03-11 01:04:08.620131 | orchestrator |
2026-03-11 01:04:08.620137 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 01:04:08.620326 | orchestrator | Wednesday 11 March 2026 01:03:55 +0000 (0:00:03.746) 0:00:27.373 *******
2026-03-11 01:04:08.620334 | orchestrator | ===============================================================================
2026-03-11 01:04:08.620378 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.07s
2026-03-11 01:04:08.620386 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 5.32s
2026-03-11 01:04:08.620391 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 3.75s
2026-03-11 01:04:08.620396 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.21s
2026-03-11 01:04:08.620411 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 2.63s
2026-03-11 01:04:08.620416 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 2.62s
2026-03-11 01:04:08.620422 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.59s
2026-03-11 01:04:08.620461 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.05s
2026-03-11 01:04:08.620467 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.90s
2026-03-11 01:04:08.620472 | orchestrator |
2026-03-11 01:04:08.620483 | orchestrator |
2026-03-11 01:04:08.620488 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-11 01:04:08.620493 | orchestrator |
2026-03-11 01:04:08.620498 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-11 01:04:08.620504 | orchestrator | Wednesday 11 March 2026 01:03:08 +0000 (0:00:00.282) 0:00:00.282 *******
2026-03-11 01:04:08.620509 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:04:08.620515 | orchestrator | ok: [testbed-node-1]
2026-03-11 01:04:08.620520 | orchestrator | ok: [testbed-node-2]
2026-03-11 01:04:08.620526 | orchestrator |
2026-03-11 01:04:08.620531 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 01:04:08.620536 | orchestrator | Wednesday 11 March 2026 01:03:08 +0000 (0:00:00.285) 0:00:00.568 *******
2026-03-11 01:04:08.620541 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2026-03-11 01:04:08.620546 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2026-03-11 01:04:08.620552 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2026-03-11 01:04:08.620557 | orchestrator |
2026-03-11 01:04:08.620563 | orchestrator | PLAY [Apply role placement] ****************************************************
2026-03-11 01:04:08.620569 | orchestrator |
2026-03-11 01:04:08.620574 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-11 01:04:08.620579 | orchestrator | Wednesday 11 March 2026 01:03:09 +0000 (0:00:00.356) 0:00:00.924 *******
2026-03-11 01:04:08.620584 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:04:08.620590 | orchestrator |
2026-03-11 01:04:08.620595 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2026-03-11 01:04:08.620600 | orchestrator | Wednesday 11 March 2026 01:03:09 +0000 (0:00:00.463) 0:00:01.388 *******
2026-03-11 01:04:08.620605 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2026-03-11 01:04:08.620611 | orchestrator |
2026-03-11 01:04:08.620616 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2026-03-11 01:04:08.620621 | orchestrator | Wednesday 11 March 2026 01:03:13 +0000 (0:00:03.395) 0:00:04.783 *******
2026-03-11 01:04:08.620626 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2026-03-11 01:04:08.620632 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2026-03-11 01:04:08.620637 | orchestrator |
2026-03-11 01:04:08.620642 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2026-03-11 01:04:08.620647 | orchestrator | Wednesday 11 March 2026 01:03:19 +0000 (0:00:06.231) 0:00:11.015 *******
2026-03-11 01:04:08.620653 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-11 01:04:08.620658 | orchestrator |
2026-03-11 01:04:08.620664 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2026-03-11 01:04:08.620669 | orchestrator | Wednesday 11 March 2026 01:03:22 +0000 (0:00:03.176) 0:00:14.191 *******
2026-03-11 01:04:08.620674 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2026-03-11 01:04:08.620679 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-11 01:04:08.620685 | orchestrator |
2026-03-11 01:04:08.620690 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2026-03-11 01:04:08.620702 | orchestrator | Wednesday 11 March 2026 01:03:25 +0000 (0:00:03.430) 0:00:17.622 *******
2026-03-11 01:04:08.620713 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-11 01:04:08.620718 | orchestrator |
2026-03-11 01:04:08.620723 | orchestrator | TASK [service-ks-register : placement | Granting user roles] *******************
2026-03-11 01:04:08.620728 | orchestrator | Wednesday 11 March 2026 01:03:28 +0000 (0:00:02.972) 0:00:20.594 *******
2026-03-11 01:04:08.620734 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2026-03-11 01:04:08.620739 | orchestrator |
2026-03-11
01:04:08.620744 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-11 01:04:08.620750 | orchestrator | Wednesday 11 March 2026 01:03:32 +0000 (0:00:03.695) 0:00:24.290 *******
2026-03-11 01:04:08.620755 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:08.620761 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:08.620766 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:08.620771 | orchestrator |
2026-03-11 01:04:08.620776 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2026-03-11 01:04:08.620782 | orchestrator | Wednesday 11 March 2026 01:03:32 +0000 (0:00:00.258) 0:00:24.549 *******
2026-03-11 01:04:08.620790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 01:04:08.620804 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 01:04:08.620810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 01:04:08.620816 | orchestrator |
2026-03-11 01:04:08.620821 | orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2026-03-11 01:04:08.620830 | orchestrator | Wednesday 11 March 2026 01:03:33 +0000 (0:00:00.802) 0:00:25.351 *******
2026-03-11 01:04:08.620836 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:08.620842 | orchestrator |
2026-03-11 01:04:08.620847 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2026-03-11 01:04:08.620852 | orchestrator | Wednesday 11 March 2026 01:03:33 +0000 (0:00:00.112) 0:00:25.463 *******
2026-03-11 01:04:08.620857 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:08.620862 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:08.620868 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:08.620873 | orchestrator |
2026-03-11 01:04:08.620878 | orchestrator | TASK [placement : include_tasks] ***********************************************
2026-03-11 01:04:08.620884 | orchestrator | Wednesday 11 March 2026 01:03:34 +0000 (0:00:00.440) 0:00:25.903 *******
2026-03-11 01:04:08.620893 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:04:08.620899 | orchestrator |
2026-03-11 01:04:08.620904 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2026-03-11 01:04:08.620908 | orchestrator | Wednesday 11 March 2026 01:03:34 +0000 (0:00:00.424) 0:00:26.328 *******
2026-03-11 01:04:08.620914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 01:04:08.620925 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 01:04:08.620931 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 01:04:08.620937 | orchestrator |
2026-03-11 01:04:08.620943 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] ***
2026-03-11 01:04:08.620964 | orchestrator | Wednesday 11 March 2026 01:03:36 +0000 (0:00:01.553) 0:00:27.882 *******
2026-03-11 01:04:08.620971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 01:04:08.620976 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:08.620985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 01:04:08.620991 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:08.621000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 01:04:08.621006 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:08.621011 | orchestrator |
2026-03-11 01:04:08.621017 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] ***
2026-03-11 01:04:08.621022 | orchestrator | Wednesday 11 March 2026 01:03:36 +0000 (0:00:00.689) 0:00:28.571 *******
2026-03-11 01:04:08.621028 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 01:04:08.621038 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:08.621044 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 01:04:08.621049 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:08.621057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 01:04:08.621064 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:08.621069 | orchestrator |
2026-03-11 01:04:08.621074 | orchestrator | TASK [placement : Copying over config.json files for services] *****************
2026-03-11 01:04:08.621080 | orchestrator | Wednesday 11 March 2026 01:03:37 +0000 (0:00:00.616) 0:00:29.188 *******
2026-03-11 01:04:08.621089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 01:04:08.621095 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 01:04:08.621105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 01:04:08.621112 | orchestrator |
2026-03-11 01:04:08.621118 | orchestrator | TASK [placement : Copying over placement.conf] *********************************
2026-03-11 01:04:08.621123 | orchestrator | Wednesday 11 March 2026 01:03:38 +0000 (0:00:01.332) 0:00:30.521 *******
2026-03-11 01:04:08.621132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 01:04:08.621139 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})
2026-03-11 01:04:08.621150 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group':
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:04:08.621160 | orchestrator | 2026-03-11 01:04:08.621166 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2026-03-11 01:04:08.621171 | orchestrator | Wednesday 11 March 2026 01:03:41 +0000 (0:00:02.301) 0:00:32.822 ******* 2026-03-11 01:04:08.621177 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-11 01:04:08.621195 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-11 01:04:08.621201 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2026-03-11 01:04:08.621206 | orchestrator | 2026-03-11 01:04:08.621212 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2026-03-11 01:04:08.621217 | orchestrator | Wednesday 11 March 2026 01:03:42 +0000 (0:00:01.266) 0:00:34.089 ******* 2026-03-11 01:04:08.621223 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:04:08.621228 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:04:08.621233 | orchestrator | changed: [testbed-node-2] 2026-03-11 
01:04:08.621239 | orchestrator | 2026-03-11 01:04:08.621245 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2026-03-11 01:04:08.621252 | orchestrator | Wednesday 11 March 2026 01:03:43 +0000 (0:00:01.236) 0:00:35.325 ******* 2026-03-11 01:04:08.621266 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-11 01:04:08.621275 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:08.621281 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': 
'8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-11 01:04:08.621286 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:08.621297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2026-03-11 01:04:08.621307 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:08.621313 | orchestrator | 2026-03-11 01:04:08.621318 | orchestrator | TASK [placement : Check placement containers] ********************************** 2026-03-11 01:04:08.621324 | orchestrator | Wednesday 11 March 2026 01:03:44 +0000 (0:00:00.548) 0:00:35.874 ******* 2026-03-11 01:04:08.621329 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:04:08.621335 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:04:08.621341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2026-03-11 01:04:08.621347 | orchestrator | 2026-03-11 01:04:08.621353 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2026-03-11 01:04:08.621358 | orchestrator | Wednesday 11 March 2026 01:03:45 +0000 (0:00:01.056) 0:00:36.930 ******* 2026-03-11 01:04:08.621364 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:04:08.621370 | orchestrator | 2026-03-11 01:04:08.621375 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2026-03-11 01:04:08.621381 | orchestrator | Wednesday 11 March 2026 01:03:47 +0000 (0:00:02.663) 0:00:39.594 ******* 2026-03-11 01:04:08.621386 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:04:08.621396 | orchestrator | 2026-03-11 01:04:08.621401 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2026-03-11 01:04:08.621407 | orchestrator | Wednesday 11 March 2026 01:03:50 +0000 (0:00:02.430) 0:00:42.025 ******* 2026-03-11 01:04:08.621413 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:04:08.621419 | orchestrator | 2026-03-11 01:04:08.621423 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-11 01:04:08.621427 | orchestrator | Wednesday 11 March 2026 01:04:03 +0000 (0:00:12.663) 0:00:54.689 ******* 2026-03-11 01:04:08.621431 | orchestrator | 2026-03-11 01:04:08.621435 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-11 01:04:08.621439 | orchestrator | Wednesday 11 March 2026 
01:04:03 +0000 (0:00:00.067) 0:00:54.756 ******* 2026-03-11 01:04:08.621443 | orchestrator | 2026-03-11 01:04:08.621450 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2026-03-11 01:04:08.621454 | orchestrator | Wednesday 11 March 2026 01:04:03 +0000 (0:00:00.051) 0:00:54.807 ******* 2026-03-11 01:04:08.621458 | orchestrator | 2026-03-11 01:04:08.621461 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2026-03-11 01:04:08.621465 | orchestrator | Wednesday 11 March 2026 01:04:03 +0000 (0:00:00.089) 0:00:54.897 ******* 2026-03-11 01:04:08.621469 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:04:08.621472 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:04:08.621476 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:04:08.621480 | orchestrator | 2026-03-11 01:04:08.621483 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:04:08.621488 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2026-03-11 01:04:08.621492 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-11 01:04:08.621496 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-11 01:04:08.621500 | orchestrator | 2026-03-11 01:04:08.621505 | orchestrator | 2026-03-11 01:04:08.621508 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:04:08.621513 | orchestrator | Wednesday 11 March 2026 01:04:07 +0000 (0:00:04.561) 0:00:59.458 ******* 2026-03-11 01:04:08.621517 | orchestrator | =============================================================================== 2026-03-11 01:04:08.621520 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.66s 2026-03-11 
01:04:08.621526 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.23s 2026-03-11 01:04:08.621532 | orchestrator | placement : Restart placement-api container ----------------------------- 4.56s 2026-03-11 01:04:08.621538 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 3.70s 2026-03-11 01:04:08.621544 | orchestrator | service-ks-register : placement | Creating users ------------------------ 3.43s 2026-03-11 01:04:08.621549 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.40s 2026-03-11 01:04:08.621555 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.18s 2026-03-11 01:04:08.621628 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 2.97s 2026-03-11 01:04:08.621649 | orchestrator | placement : Creating placement databases -------------------------------- 2.66s 2026-03-11 01:04:08.621655 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.43s 2026-03-11 01:04:08.621661 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.30s 2026-03-11 01:04:08.621666 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.55s 2026-03-11 01:04:08.621672 | orchestrator | placement : Copying over config.json files for services ----------------- 1.33s 2026-03-11 01:04:08.621683 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.27s 2026-03-11 01:04:08.621691 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.24s 2026-03-11 01:04:08.621697 | orchestrator | placement : Check placement containers ---------------------------------- 1.06s 2026-03-11 01:04:08.621702 | orchestrator | placement : Ensuring config directories exist --------------------------- 0.80s 2026-03-11 01:04:08.621709 
| orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.69s 2026-03-11 01:04:08.621712 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.62s 2026-03-11 01:04:08.621716 | orchestrator | placement : Copying over existing policy file --------------------------- 0.55s 2026-03-11 01:04:08.621719 | orchestrator | 2026-03-11 01:04:08 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:04:08.621722 | orchestrator | 2026-03-11 01:04:08 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:11.670256 | orchestrator | 2026-03-11 01:04:11 | INFO  | Task e81cada9-9179-4dab-ad0c-48c412f36b1b is in state STARTED 2026-03-11 01:04:11.672339 | orchestrator | 2026-03-11 01:04:11 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED 2026-03-11 01:04:11.674420 | orchestrator | 2026-03-11 01:04:11 | INFO  | Task 76c8b74a-43be-486b-a8e9-d49afbf39a76 is in state STARTED 2026-03-11 01:04:11.676072 | orchestrator | 2026-03-11 01:04:11 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state STARTED 2026-03-11 01:04:11.676117 | orchestrator | 2026-03-11 01:04:11 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:32.971950 | orchestrator | 2026-03-11 01:04:32 | INFO  | Task e81cada9-9179-4dab-ad0c-48c412f36b1b is in state STARTED 2026-03-11 01:04:32.974388 | orchestrator | 2026-03-11 01:04:32 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED 2026-03-11 01:04:32.975979 | orchestrator | 2026-03-11 01:04:32 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state STARTED 2026-03-11 01:04:32.977603 | orchestrator | 2026-03-11 01:04:32 | INFO  | Task 76c8b74a-43be-486b-a8e9-d49afbf39a76 is in state STARTED 2026-03-11 01:04:32.981978 | orchestrator | 2026-03-11 01:04:32 | INFO  | Task 6cb7a72b-7b24-469d-88d6-636b58c876b8 is in state SUCCESS 2026-03-11 01:04:32.982102 | orchestrator | 2026-03-11 01:04:32.983854 | orchestrator
| 2026-03-11 01:04:32.984028 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:04:32.984042 | orchestrator | 2026-03-11 01:04:32.984048 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:04:32.984054 | orchestrator | Wednesday 11 March 2026 01:00:07 +0000 (0:00:00.315) 0:00:00.315 ******* 2026-03-11 01:04:32.984060 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:04:32.984067 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:04:32.984073 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:04:32.984079 | orchestrator | ok: [testbed-node-3] 2026-03-11 01:04:32.984084 | orchestrator | ok: [testbed-node-4] 2026-03-11 01:04:32.984090 | orchestrator | ok: [testbed-node-5] 2026-03-11 01:04:32.984095 | orchestrator | 2026-03-11 01:04:32.984101 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 01:04:32.984107 | orchestrator | Wednesday 11 March 2026 01:00:09 +0000 (0:00:01.449) 0:00:01.764 ******* 2026-03-11 01:04:32.984113 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2026-03-11 01:04:32.984118 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2026-03-11 01:04:32.984124 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2026-03-11 01:04:32.984129 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2026-03-11 01:04:32.984167 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2026-03-11 01:04:32.984173 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2026-03-11 01:04:32.984178 | orchestrator | 2026-03-11 01:04:32.984183 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2026-03-11 01:04:32.984188 | orchestrator | 2026-03-11 01:04:32.984193 | orchestrator | TASK [neutron : include_tasks] ************************************************* 
2026-03-11 01:04:32.984198 | orchestrator | Wednesday 11 March 2026 01:00:10 +0000 (0:00:01.191) 0:00:02.956 ******* 2026-03-11 01:04:32.984337 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 01:04:32.984460 | orchestrator | 2026-03-11 01:04:32.984469 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2026-03-11 01:04:32.984480 | orchestrator | Wednesday 11 March 2026 01:00:11 +0000 (0:00:01.332) 0:00:04.288 ******* 2026-03-11 01:04:32.984484 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:04:32.984490 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:04:32.984495 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:04:32.984500 | orchestrator | ok: [testbed-node-3] 2026-03-11 01:04:32.984505 | orchestrator | ok: [testbed-node-4] 2026-03-11 01:04:32.984509 | orchestrator | ok: [testbed-node-5] 2026-03-11 01:04:32.984514 | orchestrator | 2026-03-11 01:04:32.984518 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2026-03-11 01:04:32.984523 | orchestrator | Wednesday 11 March 2026 01:00:12 +0000 (0:00:01.130) 0:00:05.419 ******* 2026-03-11 01:04:32.984527 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:04:32.984532 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:04:32.984536 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:04:32.984541 | orchestrator | ok: [testbed-node-3] 2026-03-11 01:04:32.984545 | orchestrator | ok: [testbed-node-4] 2026-03-11 01:04:32.984550 | orchestrator | ok: [testbed-node-5] 2026-03-11 01:04:32.984555 | orchestrator | 2026-03-11 01:04:32.984560 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2026-03-11 01:04:32.984565 | orchestrator | Wednesday 11 March 2026 01:00:14 +0000 (0:00:01.547) 0:00:06.967 ******* 2026-03-11 01:04:32.984570 | orchestrator | ok: 
[testbed-node-0] => { 2026-03-11 01:04:32.984575 | orchestrator |  "changed": false, 2026-03-11 01:04:32.984580 | orchestrator |  "msg": "All assertions passed" 2026-03-11 01:04:32.984586 | orchestrator | } 2026-03-11 01:04:32.984591 | orchestrator | ok: [testbed-node-1] => { 2026-03-11 01:04:32.984595 | orchestrator |  "changed": false, 2026-03-11 01:04:32.984600 | orchestrator |  "msg": "All assertions passed" 2026-03-11 01:04:32.984605 | orchestrator | } 2026-03-11 01:04:32.984610 | orchestrator | ok: [testbed-node-2] => { 2026-03-11 01:04:32.984615 | orchestrator |  "changed": false, 2026-03-11 01:04:32.984619 | orchestrator |  "msg": "All assertions passed" 2026-03-11 01:04:32.984624 | orchestrator | } 2026-03-11 01:04:32.984629 | orchestrator | ok: [testbed-node-3] => { 2026-03-11 01:04:32.984634 | orchestrator |  "changed": false, 2026-03-11 01:04:32.984639 | orchestrator |  "msg": "All assertions passed" 2026-03-11 01:04:32.984644 | orchestrator | } 2026-03-11 01:04:32.984648 | orchestrator | ok: [testbed-node-4] => { 2026-03-11 01:04:32.984653 | orchestrator |  "changed": false, 2026-03-11 01:04:32.984658 | orchestrator |  "msg": "All assertions passed" 2026-03-11 01:04:32.984674 | orchestrator | } 2026-03-11 01:04:32.984679 | orchestrator | ok: [testbed-node-5] => { 2026-03-11 01:04:32.984684 | orchestrator |  "changed": false, 2026-03-11 01:04:32.984690 | orchestrator |  "msg": "All assertions passed" 2026-03-11 01:04:32.984694 | orchestrator | } 2026-03-11 01:04:32.984699 | orchestrator | 2026-03-11 01:04:32.984704 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2026-03-11 01:04:32.984709 | orchestrator | Wednesday 11 March 2026 01:00:15 +0000 (0:00:00.651) 0:00:07.619 ******* 2026-03-11 01:04:32.984714 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:32.984719 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:32.984734 | orchestrator | skipping: [testbed-node-2] 2026-03-11 
01:04:32.984739 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:32.984849 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:32.984855 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:32.984861 | orchestrator | 2026-03-11 01:04:32.984868 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2026-03-11 01:04:32.984873 | orchestrator | Wednesday 11 March 2026 01:00:15 +0000 (0:00:00.536) 0:00:08.156 ******* 2026-03-11 01:04:32.984879 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2026-03-11 01:04:32.984884 | orchestrator | 2026-03-11 01:04:32.984890 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2026-03-11 01:04:32.984895 | orchestrator | Wednesday 11 March 2026 01:00:19 +0000 (0:00:03.660) 0:00:11.817 ******* 2026-03-11 01:04:32.984899 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2026-03-11 01:04:32.984904 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2026-03-11 01:04:32.984907 | orchestrator | 2026-03-11 01:04:32.984957 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2026-03-11 01:04:32.984965 | orchestrator | Wednesday 11 March 2026 01:00:26 +0000 (0:00:07.746) 0:00:19.563 ******* 2026-03-11 01:04:32.984970 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-11 01:04:32.984974 | orchestrator | 2026-03-11 01:04:32.984979 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2026-03-11 01:04:32.984984 | orchestrator | Wednesday 11 March 2026 01:00:30 +0000 (0:00:03.570) 0:00:23.133 ******* 2026-03-11 01:04:32.984989 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2026-03-11 01:04:32.984994 | orchestrator | [WARNING]: Module did not set no_log for 
update_password 2026-03-11 01:04:32.984999 | orchestrator | 2026-03-11 01:04:32.985004 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2026-03-11 01:04:32.985009 | orchestrator | Wednesday 11 March 2026 01:00:35 +0000 (0:00:04.790) 0:00:27.923 ******* 2026-03-11 01:04:32.985013 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-11 01:04:32.985019 | orchestrator | 2026-03-11 01:04:32.985024 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2026-03-11 01:04:32.985029 | orchestrator | Wednesday 11 March 2026 01:00:39 +0000 (0:00:04.173) 0:00:32.096 ******* 2026-03-11 01:04:32.985034 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2026-03-11 01:04:32.985040 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2026-03-11 01:04:32.985045 | orchestrator | 2026-03-11 01:04:32.985062 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2026-03-11 01:04:32.985067 | orchestrator | Wednesday 11 March 2026 01:00:47 +0000 (0:00:07.741) 0:00:39.838 ******* 2026-03-11 01:04:32.985073 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:32.985079 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:32.985084 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:32.985089 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:32.985092 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:32.985096 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:32.985100 | orchestrator | 2026-03-11 01:04:32.985104 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2026-03-11 01:04:32.985109 | orchestrator | Wednesday 11 March 2026 01:00:47 +0000 (0:00:00.698) 0:00:40.536 ******* 2026-03-11 01:04:32.985114 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:32.985119 | orchestrator | 
skipping: [testbed-node-3]
2026-03-11 01:04:32.985125 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.985131 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.985134 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.985138 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.985142 | orchestrator |
2026-03-11 01:04:32.985156 | orchestrator | TASK [neutron : Check IPv6 support] ********************************************
2026-03-11 01:04:32.985172 | orchestrator | Wednesday 11 March 2026 01:00:50 +0000 (0:00:02.360) 0:00:42.897 *******
2026-03-11 01:04:32.985179 | orchestrator | ok: [testbed-node-1]
2026-03-11 01:04:32.985184 | orchestrator | ok: [testbed-node-2]
2026-03-11 01:04:32.985190 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:04:32.985195 | orchestrator | ok: [testbed-node-3]
2026-03-11 01:04:32.985199 | orchestrator | ok: [testbed-node-4]
2026-03-11 01:04:32.985204 | orchestrator | ok: [testbed-node-5]
2026-03-11 01:04:32.985210 | orchestrator |
2026-03-11 01:04:32.985215 | orchestrator | TASK [Setting sysctl values] ***************************************************
2026-03-11 01:04:32.985220 | orchestrator | Wednesday 11 March 2026 01:00:51 +0000 (0:00:00.941) 0:00:43.839 *******
2026-03-11 01:04:32.985226 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.985231 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.985236 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.985242 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.985247 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:32.985253 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.985258 | orchestrator |
2026-03-11 01:04:32.985264 | orchestrator | TASK [neutron : Ensuring config directories exist] *****************************
2026-03-11 01:04:32.985272 | orchestrator | Wednesday 11 March 2026 01:00:54 +0000 (0:00:02.881) 0:00:46.721 ******* 2026-03-11
01:04:32.985288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.985326 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.985334 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image':
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.985347 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.985353 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.985362 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.985369 | orchestrator |
2026-03-11 01:04:32.985374 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2026-03-11 01:04:32.985379 | orchestrator | Wednesday 11 March 2026 01:00:58 +0000 (0:00:04.456) 0:00:51.178 *******
2026-03-11 01:04:32.985385 | orchestrator | [WARNING]: Skipped
2026-03-11 01:04:32.985390 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2026-03-11 01:04:32.985397 | orchestrator | due to this access issue:
2026-03-11 01:04:32.985402 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2026-03-11 01:04:32.985408 | orchestrator | a directory
2026-03-11 01:04:32.985413 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-11 01:04:32.985419 | orchestrator |
2026-03-11 01:04:32.985424 |
orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-11 01:04:32.985475 | orchestrator | Wednesday 11 March 2026 01:00:59 +0000 (0:00:00.865) 0:00:52.043 *******
2026-03-11 01:04:32.985498 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2026-03-11 01:04:32.985506 | orchestrator |
2026-03-11 01:04:32.985511 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2026-03-11 01:04:32.985517 | orchestrator | Wednesday 11 March 2026 01:01:00 +0000 (0:00:01.084) 0:00:53.128 *******
2026-03-11 01:04:32.985523 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.985535 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.985541 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.985550 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30',
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.985577 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.985585 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.985596 | orchestrator |
2026-03-11 01:04:32.985601 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS
certificate] ***
2026-03-11 01:04:32.985608 | orchestrator | Wednesday 11 March 2026 01:01:03 +0000 (0:00:03.404) 0:00:56.532 *******
2026-03-11 01:04:32.985614 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.985620 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.985630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn':
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.985633 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.985637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.985659 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.985666 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.985677 | orchestrator | skipping: [testbed-node-3]
2026-03-11
01:04:32.985682 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.985688 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.985693 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.985699 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.985704 | orchestrator |
2026-03-11 01:04:32.985709 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2026-03-11 01:04:32.985714 | orchestrator | Wednesday 11 March 2026 01:01:07 +0000 (0:00:03.545) 0:01:00.078 ******* 2026-03-11 01:04:32.985723
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.985727 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.985737 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.985748 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.985756 |
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.985761 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.985766 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.985771 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.985776 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent',
'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.985782 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.985794 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.985800 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:32.985805 | orchestrator |
2026-03-11 01:04:32.985810 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2026-03-11 01:04:32.985820 | orchestrator | Wednesday 11 March 2026 01:01:10 +0000 (0:00:03.279) 0:01:03.358 *******
2026-03-11 01:04:32.985825 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.985830 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.985837 | orchestrator |
skipping: [testbed-node-2]
2026-03-11 01:04:32.985844 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.985849 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:32.985854 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.985859 | orchestrator |
2026-03-11 01:04:32.985864 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************
2026-03-11 01:04:32.985875 | orchestrator | Wednesday 11 March 2026 01:01:13 +0000 (0:00:02.503) 0:01:05.861 *******
2026-03-11 01:04:32.985881 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.985886 | orchestrator |
2026-03-11 01:04:32.985891 | orchestrator | TASK [neutron : Set neutron policy file] ***************************************
2026-03-11 01:04:32.985897 | orchestrator | Wednesday 11 March 2026 01:01:13 +0000 (0:00:00.123) 0:01:05.985 *******
2026-03-11 01:04:32.985902 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.985907 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.985912 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.985917 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:32.985923 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.985928 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.985933 | orchestrator |
2026-03-11 01:04:32.985939 | orchestrator | TASK [neutron : Copying over existing policy file] *****************************
2026-03-11 01:04:32.985944 | orchestrator | Wednesday 11 March 2026 01:01:13 +0000 (0:00:00.579) 0:01:06.564 *******
2026-03-11 01:04:32.985950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.985956 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.985961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.985966 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.985975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.985985 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.985994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.986000 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.986005 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared',
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.986010 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:32.986057 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.986063 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.986069 | orchestrator |
2026-03-11 01:04:32.986074 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2026-03-11 01:04:32.986079 | orchestrator | Wednesday 11 March 2026 01:01:17 +0000 (0:00:03.498) 0:01:10.063 *******
2026-03-11 01:04:32.986085 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3',
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-11 01:04:32.986100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-11 01:04:32.986111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': 
True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-11 01:04:32.986117 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-11 01:04:32.986123 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-11 01:04:32.986128 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-11 01:04:32.986138 | orchestrator | 2026-03-11 01:04:32.986153 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2026-03-11 01:04:32.986159 | orchestrator | Wednesday 11 March 2026 01:01:22 +0000 (0:00:04.904) 0:01:14.968 ******* 2026-03-11 01:04:32.986168 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-11 01:04:32.986177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-11 01:04:32.986182 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-11 01:04:32.986187 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-11 01:04:32.986197 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2026-03-11 01:04:32.986205 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 
'timeout': '30'}}}) 2026-03-11 01:04:32.986210 | orchestrator | 2026-03-11 01:04:32.986215 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2026-03-11 01:04:32.986220 | orchestrator | Wednesday 11 March 2026 01:01:30 +0000 (0:00:07.883) 0:01:22.852 ******* 2026-03-11 01:04:32.986229 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 01:04:32.986234 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:32.986239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-11 01:04:32.986245 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:32.986250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-11 01:04:32.986259 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:32.986267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-11 01:04:32.986272 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:32.986277 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 01:04:32.986282 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:32.986291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 01:04:32.986296 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:32.986301 | orchestrator | 2026-03-11 01:04:32.986306 | orchestrator | TASK [neutron : Copying over ssh key] 
****************************************** 2026-03-11 01:04:32.986312 | orchestrator | Wednesday 11 March 2026 01:01:34 +0000 (0:00:03.885) 0:01:26.738 ******* 2026-03-11 01:04:32.986317 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:32.986322 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:32.986327 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:32.986332 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:04:32.986337 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:04:32.986342 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:04:32.986347 | orchestrator | 2026-03-11 01:04:32.986353 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] ************************************* 2026-03-11 01:04:32.986358 | orchestrator | Wednesday 11 March 2026 01:01:37 +0000 (0:00:03.456) 0:01:30.194 ******* 2026-03-11 01:04:32.986377 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 01:04:32.986383 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:32.986388 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 01:04:32.986394 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:32.986399 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2026-03-11 01:04:32.986403 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:32.986409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-11 01:04:32.986413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-11 01:04:32.986419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2026-03-11 01:04:32.986422 | orchestrator | 2026-03-11 01:04:32.986425 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2026-03-11 01:04:32.986429 | orchestrator | Wednesday 11 March 2026 01:01:41 +0000 (0:00:04.039) 0:01:34.233 ******* 2026-03-11 01:04:32.986432 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:32.986435 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:32.986438 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:32.986442 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:32.986445 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:32.986449 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:32.986453 | orchestrator | 2026-03-11 01:04:32.986457 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2026-03-11 01:04:32.986460 | orchestrator | Wednesday 11 March 2026 01:01:44 +0000 (0:00:03.274) 0:01:37.508 ******* 2026-03-11 01:04:32.986464 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:32.986468 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:32.986471 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:32.986479 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:32.986482 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:32.986486 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:32.986490 | orchestrator | 2026-03-11 01:04:32.986494 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2026-03-11 01:04:32.986497 | orchestrator | Wednesday 11 March 2026 01:01:47 +0000 (0:00:02.440) 0:01:39.948 ******* 2026-03-11 01:04:32.986501 | orchestrator | skipping: [testbed-node-1] 2026-03-11 
01:04:32.986504 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:32.986508 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:32.986512 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:32.986516 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:32.986519 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:32.986523 | orchestrator | 2026-03-11 01:04:32.986527 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2026-03-11 01:04:32.986531 | orchestrator | Wednesday 11 March 2026 01:01:50 +0000 (0:00:02.727) 0:01:42.676 ******* 2026-03-11 01:04:32.986534 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:32.986538 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:32.986542 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:32.986546 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:32.986549 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:32.986553 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:32.986557 | orchestrator | 2026-03-11 01:04:32.986560 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2026-03-11 01:04:32.986564 | orchestrator | Wednesday 11 March 2026 01:01:52 +0000 (0:00:01.993) 0:01:44.669 ******* 2026-03-11 01:04:32.986570 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:32.986574 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:32.986578 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:32.986581 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:32.986587 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:32.986591 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:32.986594 | orchestrator | 2026-03-11 01:04:32.986598 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2026-03-11 01:04:32.986602 | orchestrator | Wednesday 11 March 
2026 01:01:54 +0000 (0:00:02.201) 0:01:46.871 ******* 2026-03-11 01:04:32.986605 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:32.986609 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:32.986613 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:32.986616 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:32.986620 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:04:32.986624 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:32.986627 | orchestrator | 2026-03-11 01:04:32.986631 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2026-03-11 01:04:32.986634 | orchestrator | Wednesday 11 March 2026 01:01:57 +0000 (0:00:03.123) 0:01:49.995 ******* 2026-03-11 01:04:32.986639 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-11 01:04:32.986643 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:04:32.986648 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-11 01:04:32.986653 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:32.986658 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-11 01:04:32.986663 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:04:32.986667 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-11 01:04:32.986673 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:04:32.986678 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-11 01:04:32.986683 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:04:32.986689 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2026-03-11 01:04:32.986695 | orchestrator | skipping: [testbed-node-5] 2026-03-11 
01:04:32.986700 | orchestrator | 2026-03-11 01:04:32.986705 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2026-03-11 01:04:32.986710 | orchestrator | Wednesday 11 March 2026 01:01:59 +0000 (0:00:02.225) 0:01:52.220 ******* 2026-03-11 01:04:32.986715 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2026-03-11 01:04:32.986720 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:04:32.986730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.986738 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.986745 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.986749 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.986753 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:32.986756 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.986761 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.986764 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.986768 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.986772 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.986781 | orchestrator |
2026-03-11 01:04:32.986786 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] *********************************
2026-03-11 01:04:32.986795 | orchestrator | Wednesday 11 March 2026 01:02:01 +0000 (0:00:02.272) 0:01:54.493 *******
2026-03-11 01:04:32.986806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.986811 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.986821 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.986827 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.986831 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.986836 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.986841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.986849 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.986858 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.986864 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:32.986869 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.986872 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.986875 | orchestrator |
2026-03-11 01:04:32.986878 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] *******************************
2026-03-11 01:04:32.986882 | orchestrator | Wednesday 11 March 2026 01:02:06 +0000 (0:00:04.625) 0:01:59.118 *******
2026-03-11 01:04:32.986887 |
orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.986894 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.986898 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.986902 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:32.986907 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.986912 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.986920 | orchestrator |
2026-03-11 01:04:32.986925 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] *******************
2026-03-11 01:04:32.986936 | orchestrator | Wednesday 11 March 2026 01:02:09 +0000 (0:00:03.423) 0:02:02.541 *******
2026-03-11 01:04:32.986941 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.986945 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.986950 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.986956 | orchestrator | changed: [testbed-node-3]
2026-03-11 01:04:32.986961 | orchestrator | changed: [testbed-node-5]
2026-03-11 01:04:32.986966 | orchestrator | changed: [testbed-node-4]
2026-03-11 01:04:32.986971 | orchestrator |
2026-03-11 01:04:32.986976 | orchestrator | TASK [neutron : Copying over metering_agent.ini] *******************************
2026-03-11 01:04:32.986982 | orchestrator | Wednesday 11 March 2026 01:02:14 +0000 (0:00:04.202) 0:02:06.744 *******
2026-03-11 01:04:32.986985 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.986988 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.986991 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.986994 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:32.986997 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.987000 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.987004 | orchestrator |
2026-03-11 01:04:32.987007 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] *************************
2026-03-11 01:04:32.987010 | orchestrator | Wednesday 11 March 2026 01:02:16 +0000 (0:00:02.006) 0:02:08.750 *******
2026-03-11 01:04:32.987013 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.987016 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.987019 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.987022 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.987028 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.987031 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:32.987034 | orchestrator |
2026-03-11 01:04:32.987037 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] **********************************
2026-03-11 01:04:32.987041 | orchestrator | Wednesday 11 March 2026 01:02:18 +0000 (0:00:02.394) 0:02:11.145 *******
2026-03-11 01:04:32.987044 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.987047 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.987050 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.987053 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:32.987056 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.987059 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.987062 | orchestrator |
2026-03-11 01:04:32.987065 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************
2026-03-11 01:04:32.987068 | orchestrator | Wednesday 11 March 2026 01:02:21 +0000 (0:00:02.481) 0:02:13.626 *******
2026-03-11 01:04:32.987071 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.987074 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.987077 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.987080 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.987083 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.987086 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:32.987092 | orchestrator |
2026-03-11 01:04:32.987097 | orchestrator | TASK [neutron : Copying over nsx.ini] ******************************************
2026-03-11 01:04:32.987101 | orchestrator | Wednesday 11 March 2026 01:02:23 +0000 (0:00:01.985) 0:02:15.612 *******
2026-03-11 01:04:32.987106 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.987112 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.987117 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.987122 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.987127 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:32.987132 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.987218 | orchestrator |
2026-03-11 01:04:32.987226 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] **************************
2026-03-11 01:04:32.987230 | orchestrator | Wednesday 11 March 2026 01:02:25 +0000 (0:00:02.704) 0:02:18.316 *******
2026-03-11 01:04:32.987233 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.987236 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.987239 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.987242 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:32.987245 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.987252 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.987258 | orchestrator |
2026-03-11 01:04:32.987264 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ********************************
2026-03-11 01:04:32.987271 | orchestrator | Wednesday 11 March 2026 01:02:28 +0000 (0:00:02.510) 0:02:20.827 *******
2026-03-11 01:04:32.987276 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.987281 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.987285 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.987291 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:32.987296 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.987304 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.987309 | orchestrator |
2026-03-11 01:04:32.987314 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] ****************************
2026-03-11 01:04:32.987319 | orchestrator | Wednesday 11 March 2026 01:02:30 +0000 (0:00:02.532) 0:02:23.359 *******
2026-03-11 01:04:32.987325 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-11 01:04:32.987331 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:32.987336 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-11 01:04:32.987341 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.987351 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-11 01:04:32.987357 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-11 01:04:32.987362 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.987367 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.987377 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-11 01:04:32.987383 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.987388 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2026-03-11 01:04:32.987393 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.987398 | orchestrator |
2026-03-11 01:04:32.987402 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2026-03-11 01:04:32.987407 | orchestrator | Wednesday 11 March 2026 01:02:32 +0000 (0:00:02.155) 0:02:25.515 *******
2026-03-11 01:04:32.987412 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.987418 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.987424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.987429 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.987437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.987443 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:32.987448 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.987457 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.987466 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.987471 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.987476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.987481 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.987486 | orchestrator |
2026-03-11 01:04:32.987491 | orchestrator | TASK [neutron : Check neutron containers] **************************************
2026-03-11 01:04:32.987496 | orchestrator | Wednesday 11 March 2026 01:02:36 +0000 (0:00:03.656) 0:02:29.171 *******
2026-03-11 01:04:32.987501 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.987510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.987522 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.987528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2026-03-11 01:04:32.987533 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.987538 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2026-03-11 01:04:32.987543 | orchestrator |
2026-03-11 01:04:32.987549 | orchestrator | TASK [neutron : include_tasks] *************************************************
2026-03-11 01:04:32.987553 | orchestrator | Wednesday 11 March 2026 01:02:41 +0000 (0:00:04.933) 0:02:34.105 *******
2026-03-11 01:04:32.987558 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:04:32.987562 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:04:32.987567 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:04:32.987578 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:04:32.987583 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:04:32.987588 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:04:32.987592 | orchestrator |
2026-03-11 01:04:32.987597 | orchestrator | TASK [neutron : Creating Neutron database] *************************************
2026-03-11 01:04:32.987605 | orchestrator | Wednesday 11 March 2026 01:02:42 +0000 (0:00:00.603) 0:02:34.708 *******
2026-03-11 01:04:32.987611 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:04:32.987616 | orchestrator |
2026-03-11 01:04:32.987621 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ********
2026-03-11 01:04:32.987626 | orchestrator | Wednesday 11 March 2026 01:02:44 +0000 (0:00:02.325) 0:02:37.034 *******
2026-03-11 01:04:32.987631 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:04:32.987636 | orchestrator |
2026-03-11 01:04:32.987641 | orchestrator | TASK [neutron : Running Neutron bootstrap container] ***************************
2026-03-11 01:04:32.987645 | orchestrator | Wednesday 11 March 2026 01:02:46 +0000 (0:00:02.370) 0:02:39.404 *******
2026-03-11 01:04:32.987650 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:04:32.987656 | orchestrator |
2026-03-11 01:04:32.987661 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-11 01:04:32.987666 | orchestrator | Wednesday 11 March 2026 01:03:26 +0000 (0:00:39.199) 0:03:18.603 *******
2026-03-11 01:04:32.987671 | orchestrator |
2026-03-11 01:04:32.987675 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-11 01:04:32.987680 | orchestrator | Wednesday 11 March 2026 01:03:26 +0000 (0:00:00.254) 0:03:18.922 *******
2026-03-11 01:04:32.987684 | orchestrator |
2026-03-11 01:04:32.987689 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-11 01:04:32.987693 | orchestrator | Wednesday 11 March 2026 01:03:26 +0000 (0:00:00.065) 0:03:18.988 *******
2026-03-11 01:04:32.987698 | orchestrator |
2026-03-11 01:04:32.987703 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-11 01:04:32.987709 | orchestrator | Wednesday 11 March 2026 01:03:26 +0000 (0:00:00.063) 0:03:19.051 *******
2026-03-11 01:04:32.987714 | orchestrator |
2026-03-11 01:04:32.987722 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-11 01:04:32.987726 | orchestrator | Wednesday 11 March 2026 01:03:26 +0000 (0:00:00.064) 0:03:19.115 *******
2026-03-11 01:04:32.987732 | orchestrator |
2026-03-11 01:04:32.987763 | orchestrator | TASK [neutron : Flush Handlers] ************************************************
2026-03-11 01:04:32.987767 | orchestrator | Wednesday 11 March 2026 01:03:26 +0000 (0:00:00.064) 0:03:19.115 *******
2026-03-11 01:04:32.987772 | orchestrator |
2026-03-11 01:04:32.987778 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] *******************
2026-03-11 01:04:32.987782 | orchestrator | Wednesday 11 March 2026 01:03:26 +0000 (0:00:00.066) 0:03:19.182 *******
2026-03-11 01:04:32.987787 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:04:32.987792 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:04:32.987797 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:04:32.987803 | orchestrator |
2026-03-11 01:04:32.987808 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] *******
2026-03-11 01:04:32.987813 | orchestrator | Wednesday 11 March 2026 01:03:47 +0000 (0:00:20.910) 0:03:40.092 *******
2026-03-11 01:04:32.987818 | orchestrator | changed: [testbed-node-3]
2026-03-11 01:04:32.987823 | orchestrator | changed: [testbed-node-5]
2026-03-11 01:04:32.987828 | orchestrator | changed: [testbed-node-4]
2026-03-11 01:04:32.987834 | orchestrator |
2026-03-11 01:04:32.987839 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 01:04:32.987844 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-11 01:04:32.987850 | orchestrator | testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-11 01:04:32.987860 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0
2026-03-11 01:04:32.987865 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-11 01:04:32.987871 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-11 01:04:32.987876 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2026-03-11 01:04:32.987881 | orchestrator |
2026-03-11 01:04:32.987886 | orchestrator |
2026-03-11 01:04:32.987891 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 01:04:32.987896 | orchestrator | Wednesday 11 March 2026 01:04:31 +0000 (0:00:43.696) 0:04:23.789 *******
2026-03-11 01:04:32.987902 | orchestrator | ===============================================================================
2026-03-11 01:04:32.987907 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 43.70s
2026-03-11 01:04:32.987911 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.20s
2026-03-11 01:04:32.987916 | orchestrator | neutron : Restart neutron-server container ----------------------------- 20.91s
2026-03-11 01:04:32.987920 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.88s
2026-03-11 01:04:32.987925 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.75s
2026-03-11 01:04:32.987930 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 7.74s
2026-03-11 01:04:32.987935 | orchestrator | neutron : Check neutron containers -------------------------------------- 4.93s
2026-03-11 01:04:32.987940 | orchestrator | neutron : Copying over config.json files for services ------------------- 4.91s
2026-03-11 01:04:32.987945 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.79s
2026-03-11 01:04:32.987953 | orchestrator | neutron : Copying over fwaas_driver.ini --------------------------------- 4.63s 2026-03-11 01:04:32.987958 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 4.46s 2026-03-11 01:04:32.987964 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 4.20s 2026-03-11 01:04:32.987969 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 4.17s 2026-03-11 01:04:32.987973 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 4.04s 2026-03-11 01:04:32.987978 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 3.89s 2026-03-11 01:04:32.987983 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.66s 2026-03-11 01:04:32.987988 | orchestrator | neutron : Copying over neutron_taas.conf -------------------------------- 3.66s 2026-03-11 01:04:32.987993 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.57s 2026-03-11 01:04:32.987997 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.55s 2026-03-11 01:04:32.988003 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.50s 2026-03-11 01:04:32.988007 | orchestrator | 2026-03-11 01:04:32 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:36.027519 | orchestrator | 2026-03-11 01:04:36 | INFO  | Task e81cada9-9179-4dab-ad0c-48c412f36b1b is in state STARTED 2026-03-11 01:04:36.028666 | orchestrator | 2026-03-11 01:04:36 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED 2026-03-11 01:04:36.030167 | orchestrator | 2026-03-11 01:04:36 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state STARTED 2026-03-11 01:04:36.031916 | orchestrator | 2026-03-11 01:04:36 | INFO  | Task 
76c8b74a-43be-486b-a8e9-d49afbf39a76 is in state STARTED 2026-03-11 01:04:36.031982 | orchestrator | 2026-03-11 01:04:36 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:54.222221 | orchestrator | 2026-03-11 01:04:54 | INFO  | Task e81cada9-9179-4dab-ad0c-48c412f36b1b is in state STARTED 2026-03-11 01:04:54.224239 | orchestrator | 2026-03-11 01:04:54 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED 2026-03-11 01:04:54.224780 | orchestrator | 2026-03-11 01:04:54 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state STARTED 2026-03-11 01:04:54.225104 | orchestrator | 2026-03-11 01:04:54 | INFO  | Task 
76c8b74a-43be-486b-a8e9-d49afbf39a76 is in state STARTED 2026-03-11 01:04:54.225142 | orchestrator | 2026-03-11 01:04:54 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:04:57.250762 | orchestrator | 2026-03-11 01:04:57 | INFO  | Task e81cada9-9179-4dab-ad0c-48c412f36b1b is in state STARTED 2026-03-11 01:04:57.251254 | orchestrator | 2026-03-11 01:04:57 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED 2026-03-11 01:04:57.251840 | orchestrator | 2026-03-11 01:04:57 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state STARTED 2026-03-11 01:04:57.252460 | orchestrator | 2026-03-11 01:04:57 | INFO  | Task 76c8b74a-43be-486b-a8e9-d49afbf39a76 is in state STARTED 2026-03-11 01:04:57.253958 | orchestrator | 2026-03-11 01:04:57 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:00.284967 | orchestrator | 2026-03-11 01:05:00 | INFO  | Task e81cada9-9179-4dab-ad0c-48c412f36b1b is in state STARTED 2026-03-11 01:05:00.287986 | orchestrator | 2026-03-11 01:05:00 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state STARTED 2026-03-11 01:05:00.289994 | orchestrator | 2026-03-11 01:05:00 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state STARTED 2026-03-11 01:05:00.291968 | orchestrator | 2026-03-11 01:05:00 | INFO  | Task 76c8b74a-43be-486b-a8e9-d49afbf39a76 is in state STARTED 2026-03-11 01:05:00.292178 | orchestrator | 2026-03-11 01:05:00 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:05:03.323529 | orchestrator | 2026-03-11 01:05:03 | INFO  | Task e81cada9-9179-4dab-ad0c-48c412f36b1b is in state STARTED 2026-03-11 01:05:03.324591 | orchestrator | 2026-03-11 01:05:03 | INFO  | Task e7296873-9a96-4463-948d-8fcf77571a1f is in state SUCCESS 2026-03-11 01:05:03.326092 | orchestrator | 2026-03-11 01:05:03.326160 | orchestrator | 2026-03-11 01:05:03.326167 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:05:03.326172 | 
orchestrator | 2026-03-11 01:05:03.326177 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:05:03.326182 | orchestrator | Wednesday 11 March 2026 01:03:24 +0000 (0:00:00.259) 0:00:00.259 ******* 2026-03-11 01:05:03.326187 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:05:03.326192 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:05:03.326197 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:05:03.326201 | orchestrator | 2026-03-11 01:05:03.326206 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 01:05:03.326211 | orchestrator | Wednesday 11 March 2026 01:03:25 +0000 (0:00:00.302) 0:00:00.562 ******* 2026-03-11 01:05:03.326215 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2026-03-11 01:05:03.326221 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2026-03-11 01:05:03.326229 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2026-03-11 01:05:03.326236 | orchestrator | 2026-03-11 01:05:03.326244 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2026-03-11 01:05:03.326252 | orchestrator | 2026-03-11 01:05:03.326259 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-11 01:05:03.326267 | orchestrator | Wednesday 11 March 2026 01:03:25 +0000 (0:00:00.418) 0:00:00.980 ******* 2026-03-11 01:05:03.326275 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:05:03.326284 | orchestrator | 2026-03-11 01:05:03.326290 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2026-03-11 01:05:03.326294 | orchestrator | Wednesday 11 March 2026 01:03:26 +0000 (0:00:00.575) 0:00:01.556 ******* 2026-03-11 01:05:03.326299 | orchestrator | changed: [testbed-node-0] => (item=magnum 
(container-infra)) 2026-03-11 01:05:03.326304 | orchestrator | 2026-03-11 01:05:03.326323 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2026-03-11 01:05:03.326328 | orchestrator | Wednesday 11 March 2026 01:03:29 +0000 (0:00:03.036) 0:00:04.593 ******* 2026-03-11 01:05:03.326333 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2026-03-11 01:05:03.326338 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2026-03-11 01:05:03.326342 | orchestrator | 2026-03-11 01:05:03.326347 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2026-03-11 01:05:03.326351 | orchestrator | Wednesday 11 March 2026 01:03:36 +0000 (0:00:07.015) 0:00:11.609 ******* 2026-03-11 01:05:03.326363 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-11 01:05:03.326368 | orchestrator | 2026-03-11 01:05:03.326372 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2026-03-11 01:05:03.326377 | orchestrator | Wednesday 11 March 2026 01:03:39 +0000 (0:00:03.534) 0:00:15.143 ******* 2026-03-11 01:05:03.326382 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2026-03-11 01:05:03.326529 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-11 01:05:03.326537 | orchestrator | 2026-03-11 01:05:03.326542 | orchestrator | TASK [service-ks-register : magnum | Creating roles] *************************** 2026-03-11 01:05:03.326546 | orchestrator | Wednesday 11 March 2026 01:03:43 +0000 (0:00:03.959) 0:00:19.103 ******* 2026-03-11 01:05:03.326551 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-11 01:05:03.326556 | orchestrator | 2026-03-11 01:05:03.326560 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2026-03-11 
01:05:03.326565 | orchestrator | Wednesday 11 March 2026 01:03:47 +0000 (0:00:03.396) 0:00:22.500 ******* 2026-03-11 01:05:03.326570 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2026-03-11 01:05:03.326574 | orchestrator | 2026-03-11 01:05:03.326579 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2026-03-11 01:05:03.326583 | orchestrator | Wednesday 11 March 2026 01:03:50 +0000 (0:00:03.522) 0:00:26.022 ******* 2026-03-11 01:05:03.326588 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:05:03.326593 | orchestrator | 2026-03-11 01:05:03.326597 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2026-03-11 01:05:03.326602 | orchestrator | Wednesday 11 March 2026 01:03:53 +0000 (0:00:02.731) 0:00:28.754 ******* 2026-03-11 01:05:03.326607 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:05:03.326611 | orchestrator | 2026-03-11 01:05:03.326616 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2026-03-11 01:05:03.326620 | orchestrator | Wednesday 11 March 2026 01:03:57 +0000 (0:00:03.572) 0:00:32.327 ******* 2026-03-11 01:05:03.326625 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:05:03.326629 | orchestrator | 2026-03-11 01:05:03.326634 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2026-03-11 01:05:03.326638 | orchestrator | Wednesday 11 March 2026 01:04:00 +0000 (0:00:03.396) 0:00:35.724 ******* 2026-03-11 01:05:03.326655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:05:03.326665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:05:03.326685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:05:03.326694 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:05:03.326702 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:05:03.326716 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:05:03.326735 | orchestrator | 2026-03-11 01:05:03.326743 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2026-03-11 01:05:03.326751 | orchestrator | Wednesday 11 March 2026 01:04:01 +0000 (0:00:01.288) 0:00:37.013 ******* 2026-03-11 01:05:03.326759 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:05:03.326766 | orchestrator | 2026-03-11 01:05:03.326775 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2026-03-11 01:05:03.326782 | orchestrator | Wednesday 11 March 2026 01:04:01 +0000 (0:00:00.112) 0:00:37.125 ******* 2026-03-11 01:05:03.326789 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:05:03.326795 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:05:03.326800 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:05:03.326804 | orchestrator | 2026-03-11 01:05:03.326809 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2026-03-11 01:05:03.326814 | orchestrator | Wednesday 11 March 2026 01:04:02 +0000 (0:00:00.365) 0:00:37.491 ******* 2026-03-11 01:05:03.326818 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-11 01:05:03.326823 | orchestrator | 2026-03-11 
01:05:03.326839 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2026-03-11 01:05:03.326844 | orchestrator | Wednesday 11 March 2026 01:04:03 +0000 (0:00:00.843) 0:00:38.335 ******* 2026-03-11 01:05:03.326852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:05:03.326858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:05:03.326863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:05:03.326882 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:05:03.326890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:05:03.326898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:05:03.326904 | orchestrator | 2026-03-11 01:05:03.326911 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2026-03-11 01:05:03.326920 | orchestrator | Wednesday 11 March 2026 01:04:05 +0000 (0:00:02.238) 0:00:40.574 ******* 2026-03-11 01:05:03.326927 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:05:03.326934 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:05:03.326940 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:05:03.326947 | orchestrator | 2026-03-11 01:05:03.326953 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2026-03-11 
01:05:03.326960 | orchestrator | Wednesday 11 March 2026 01:04:05 +0000 (0:00:00.326) 0:00:40.900 ******* 2026-03-11 01:05:03.326967 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:05:03.326973 | orchestrator | 2026-03-11 01:05:03.326980 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2026-03-11 01:05:03.326986 | orchestrator | Wednesday 11 March 2026 01:04:06 +0000 (0:00:00.584) 0:00:41.485 ******* 2026-03-11 01:05:03.326992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:05:03.327009 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:05:03.327017 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2026-03-11 01:05:03.327026 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:05:03.327038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:05:03.327046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:05:03.327059 | orchestrator | 2026-03-11 01:05:03.327067 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2026-03-11 01:05:03.327075 | orchestrator | Wednesday 11 March 2026 01:04:08 +0000 (0:00:02.013) 0:00:43.498 ******* 2026-03-11 01:05:03.327088 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 01:05:03.327096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:05:03.327119 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:05:03.327131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 01:05:03.327140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:05:03.327148 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:05:03.327162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 01:05:03.327175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:05:03.327183 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:05:03.327190 | orchestrator |
2026-03-11 01:05:03.327198 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2026-03-11 01:05:03.327206 | orchestrator | Wednesday 11 March 2026 01:04:08 +0000 (0:00:00.629) 0:00:44.127 *******
2026-03-11 01:05:03.327215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 01:05:03.327231 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:05:03.327239 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:05:03.327247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 01:05:03.327266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:05:03.327274 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:05:03.327290 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 01:05:03.327299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:05:03.327308 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:05:03.327317 | orchestrator |
2026-03-11 01:05:03.327325 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2026-03-11 01:05:03.327333 | orchestrator | Wednesday 11 March 2026 01:04:09 +0000 (0:00:00.987) 0:00:45.115 *******
2026-03-11 01:05:03.327346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 01:05:03.327361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 01:05:03.327375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 01:05:03.327384 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:05:03.327392 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:05:03.327405 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:05:03.327418 | orchestrator |
2026-03-11 01:05:03.327426 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2026-03-11 01:05:03.327434 | orchestrator | Wednesday 11 March 2026 01:04:11 +0000 (0:00:01.946) 0:00:47.061 *******
2026-03-11 01:05:03.327442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 01:05:03.327454 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 01:05:03.327540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 01:05:03.327548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:05:03.327564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:05:03.327573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:05:03.327581 | orchestrator |
2026-03-11 01:05:03.327589 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2026-03-11 01:05:03.327597 | orchestrator | Wednesday 11 March 2026 01:04:15 +0000 (0:00:04.243) 0:00:51.304 *******
2026-03-11 01:05:03.327611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 01:05:03.327619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:05:03.327627 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:05:03.327640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 01:05:03.327655 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:05:03.327663 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:05:03.327672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 01:05:03.327685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:05:03.327694 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:05:03.327703 | orchestrator |
2026-03-11 01:05:03.327711 | orchestrator | TASK [magnum : Check magnum containers] ****************************************
2026-03-11 01:05:03.327719 | orchestrator | Wednesday 11 March 2026 01:04:16 +0000 (0:00:00.543) 0:00:51.848 *******
2026-03-11 01:05:03.327728 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 01:05:03.327745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 01:05:03.327754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2026-03-11 01:05:03.327763 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:05:03.327776 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:05:03.327784 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:05:03.327797 | orchestrator |
2026-03-11 01:05:03.327806 | orchestrator | TASK [magnum : include_tasks] **************************************************
2026-03-11 01:05:03.327814 | orchestrator | Wednesday 11 March 2026 01:04:18 +0000 (0:00:02.037) 0:00:53.886 *******
2026-03-11 01:05:03.327896 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:05:03.327904 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:05:03.327912 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:05:03.327920 | orchestrator |
2026-03-11 01:05:03.327928 | orchestrator | TASK [magnum : Creating Magnum database] ***************************************
2026-03-11 01:05:03.327937 | orchestrator | Wednesday 11 March 2026 01:04:18 +0000 (0:00:00.262) 0:00:54.148 *******
2026-03-11 01:05:03.327945 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:05:03.327953 | orchestrator |
2026-03-11 01:05:03.327961 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] **********
2026-03-11 01:05:03.327974 | orchestrator | Wednesday 11 March 2026 01:04:21 +0000 (0:00:02.389) 0:00:56.538 *******
2026-03-11 01:05:03.327983 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:05:03.327990 | orchestrator |
2026-03-11 01:05:03.327998 | orchestrator | TASK [magnum : Running Magnum bootstrap container] *****************************
2026-03-11 01:05:03.328006 | orchestrator | Wednesday 11 March 2026 01:04:23 +0000 (0:00:02.551) 0:00:59.089 *******
2026-03-11 01:05:03.328014 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:05:03.328022 | orchestrator |
2026-03-11 01:05:03.328030 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-11 01:05:03.328039 | orchestrator | Wednesday 11 March 2026 01:04:39 +0000 (0:00:15.679) 0:01:14.768 *******
2026-03-11 01:05:03.328047 | orchestrator |
2026-03-11 01:05:03.328055 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-11 01:05:03.328063 | orchestrator | Wednesday 11 March 2026 01:04:39 +0000 (0:00:00.064) 0:01:14.833 *******
2026-03-11 01:05:03.328071 | orchestrator |
2026-03-11 01:05:03.328078 | orchestrator | TASK [magnum : Flush handlers] *************************************************
2026-03-11 01:05:03.328086 | orchestrator | Wednesday 11 March 2026 01:04:39 +0000 (0:00:00.057) 0:01:14.891 *******
2026-03-11 01:05:03.328094 | orchestrator |
2026-03-11 01:05:03.328170 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************
2026-03-11 01:05:03.328180 | orchestrator | Wednesday 11 March 2026 01:04:39 +0000 (0:00:00.062) 0:01:14.953 *******
2026-03-11 01:05:03.328189 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:05:03.328197 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:05:03.328206 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:05:03.328214 | orchestrator |
2026-03-11 01:05:03.328221 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ******************
2026-03-11 01:05:03.328229 | orchestrator | Wednesday 11 March 2026 01:04:52 +0000 (0:00:12.815) 0:01:27.768 *******
2026-03-11 01:05:03.328236 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:05:03.328244 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:05:03.328252 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:05:03.328259 | orchestrator |
2026-03-11 01:05:03.328267 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 01:05:03.328276 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2026-03-11 01:05:03.328285 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-11 01:05:03.328293 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2026-03-11 01:05:03.328300 | orchestrator |
2026-03-11 01:05:03.328308 | orchestrator |
2026-03-11 01:05:03.328315 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 01:05:03.328323 | orchestrator | Wednesday 11 March 2026 01:05:01 +0000 (0:00:09.073) 0:01:36.842 *******
2026-03-11 01:05:03.328339 | orchestrator | ===============================================================================
2026-03-11 01:05:03.328347 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.68s
2026-03-11 01:05:03.328364 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 12.82s
2026-03-11 01:05:03.328373 | orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.07s
2026-03-11 01:05:03.328381 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.01s
2026-03-11 01:05:03.328389 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.24s
2026-03-11 01:05:03.328397 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.96s
2026-03-11 01:05:03.328406 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.57s
2026-03-11 01:05:03.328414 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.53s
2026-03-11 01:05:03.328422 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 3.52s
2026-03-11 01:05:03.328430 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.40s
2026-03-11 01:05:03.328438 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.40s
2026-03-11 01:05:03.328446 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.04s
2026-03-11 01:05:03.328455 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 2.73s
2026-03-11 01:05:03.328463 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.55s
2026-03-11 01:05:03.328471 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.39s
2026-03-11 01:05:03.328479 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.24s
2026-03-11 01:05:03.328487 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.04s
2026-03-11 01:05:03.328495 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.01s
2026-03-11 01:05:03.328504 | orchestrator | magnum : Copying over config.json files for services -------------------- 1.95s
2026-03-11 01:05:03.328512 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.29s
2026-03-11 01:05:03.328521 | orchestrator | 2026-03-11 01:05:03 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state STARTED
2026-03-11 01:05:03.328530 | orchestrator | 2026-03-11 01:05:03 | INFO  | Task 76c8b74a-43be-486b-a8e9-d49afbf39a76 is in state
STARTED 2026-03-11 01:05:03.328539 | orchestrator | 2026-03-11 01:05:03 | INFO  | Task 4b0d71c2-62d5-432a-9e55-dabb4f4f50fe is in state STARTED 2026-03-11 01:05:03.328557 | orchestrator | 2026-03-11 01:05:03 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:37.540569 | orchestrator | 2026-03-11 01:06:37 | INFO  | Task e81cada9-9179-4dab-ad0c-48c412f36b1b is in state STARTED 2026-03-11 01:06:37.542060 | orchestrator | 2026-03-11 01:06:37 | INFO  | Task b211ca60-6d1d-4e81-af3c-431b9bc034d2 is in state STARTED 2026-03-11 01:06:37.543734 | orchestrator | 2026-03-11 01:06:37 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state STARTED 2026-03-11 01:06:37.545927 | orchestrator | 2026-03-11 01:06:37 | INFO  | Task 76c8b74a-43be-486b-a8e9-d49afbf39a76 is in state SUCCESS 2026-03-11 01:06:37.547276 | orchestrator | 2026-03-11 01:06:37.547318 | orchestrator | 2026-03-11 01:06:37.547323 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:06:37.547327 | orchestrator | 2026-03-11 01:06:37.547330 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:06:37.547334 | orchestrator | Wednesday 11 March 2026 01:03:59 +0000 (0:00:00.235) 0:00:00.235 ******* 2026-03-11 01:06:37.547337 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:06:37.547341 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:06:37.547344 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:06:37.547347 | orchestrator | 2026-03-11 01:06:37.547351 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 01:06:37.547354 | orchestrator | Wednesday 11 March 2026 01:03:59 +0000 (0:00:00.272) 0:00:00.508 ******* 2026-03-11 01:06:37.547357 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2026-03-11 01:06:37.547360 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2026-03-11 01:06:37.547363 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2026-03-11
01:06:37.547366 | orchestrator | 2026-03-11 01:06:37.547369 | orchestrator | PLAY [Apply role glance] ******************************************************* 2026-03-11 01:06:37.547372 | orchestrator | 2026-03-11 01:06:37.547376 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-11 01:06:37.547379 | orchestrator | Wednesday 11 March 2026 01:04:00 +0000 (0:00:00.345) 0:00:00.854 ******* 2026-03-11 01:06:37.547382 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:06:37.547385 | orchestrator | 2026-03-11 01:06:37.547388 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2026-03-11 01:06:37.547391 | orchestrator | Wednesday 11 March 2026 01:04:00 +0000 (0:00:00.492) 0:00:01.346 ******* 2026-03-11 01:06:37.547394 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2026-03-11 01:06:37.547398 | orchestrator | 2026-03-11 01:06:37.547401 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2026-03-11 01:06:37.547404 | orchestrator | Wednesday 11 March 2026 01:04:04 +0000 (0:00:03.479) 0:00:04.825 ******* 2026-03-11 01:06:37.547407 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2026-03-11 01:06:37.547410 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2026-03-11 01:06:37.547413 | orchestrator | 2026-03-11 01:06:37.547416 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2026-03-11 01:06:37.547419 | orchestrator | Wednesday 11 March 2026 01:04:10 +0000 (0:00:06.111) 0:00:10.937 ******* 2026-03-11 01:06:37.547422 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-11 01:06:37.547426 | orchestrator | 2026-03-11 01:06:37.547429 | orchestrator | TASK 
[service-ks-register : glance | Creating users] *************************** 2026-03-11 01:06:37.547432 | orchestrator | Wednesday 11 March 2026 01:04:13 +0000 (0:00:02.937) 0:00:13.874 ******* 2026-03-11 01:06:37.547435 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2026-03-11 01:06:37.547438 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-11 01:06:37.547441 | orchestrator | 2026-03-11 01:06:37.547444 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2026-03-11 01:06:37.547447 | orchestrator | Wednesday 11 March 2026 01:04:16 +0000 (0:00:03.620) 0:00:17.495 ******* 2026-03-11 01:06:37.547461 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-11 01:06:37.547464 | orchestrator | 2026-03-11 01:06:37.547467 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2026-03-11 01:06:37.547470 | orchestrator | Wednesday 11 March 2026 01:04:20 +0000 (0:00:03.634) 0:00:21.129 ******* 2026-03-11 01:06:37.547474 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2026-03-11 01:06:37.547477 | orchestrator | 2026-03-11 01:06:37.547487 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2026-03-11 01:06:37.547490 | orchestrator | Wednesday 11 March 2026 01:04:24 +0000 (0:00:04.296) 0:00:25.426 ******* 2026-03-11 01:06:37.547503 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:06:37.547508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:06:37.547517 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:06:37.547521 | orchestrator | 2026-03-11 01:06:37.547524 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-11 01:06:37.547529 | orchestrator | Wednesday 11 March 2026 01:04:27 +0000 (0:00:02.893) 0:00:28.319 ******* 2026-03-11 01:06:37.547536 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:06:37.547544 | orchestrator | 2026-03-11 01:06:37.547549 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2026-03-11 01:06:37.547557 | orchestrator | Wednesday 11 March 2026 01:04:28 +0000 (0:00:00.700) 0:00:29.020 ******* 2026-03-11 01:06:37.547563 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:37.547567 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:06:37.547572 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:06:37.547577 | orchestrator | 2026-03-11 01:06:37.547581 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2026-03-11 01:06:37.547586 | orchestrator | Wednesday 11 March 2026 01:04:31 +0000 (0:00:03.425) 0:00:32.446 ******* 2026-03-11 01:06:37.547590 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-11 
01:06:37.547595 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-11 01:06:37.547600 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-11 01:06:37.547604 | orchestrator | 2026-03-11 01:06:37.547616 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2026-03-11 01:06:37.547621 | orchestrator | Wednesday 11 March 2026 01:04:33 +0000 (0:00:01.579) 0:00:34.025 ******* 2026-03-11 01:06:37.547626 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-11 01:06:37.547632 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-11 01:06:37.547637 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2026-03-11 01:06:37.547642 | orchestrator | 2026-03-11 01:06:37.547647 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2026-03-11 01:06:37.547653 | orchestrator | Wednesday 11 March 2026 01:04:34 +0000 (0:00:01.037) 0:00:35.063 ******* 2026-03-11 01:06:37.547662 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:06:37.547667 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:06:37.547672 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:06:37.547678 | orchestrator | 2026-03-11 01:06:37.547683 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2026-03-11 01:06:37.547688 | orchestrator | Wednesday 11 March 2026 01:04:35 +0000 (0:00:00.818) 0:00:35.881 ******* 2026-03-11 01:06:37.547693 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:37.547699 | orchestrator | 2026-03-11 01:06:37.547704 | orchestrator | TASK [glance : Set glance policy file] 
***************************************** 2026-03-11 01:06:37.547710 | orchestrator | Wednesday 11 March 2026 01:04:35 +0000 (0:00:00.138) 0:00:36.019 ******* 2026-03-11 01:06:37.547715 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:37.547719 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:37.547722 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:37.547782 | orchestrator | 2026-03-11 01:06:37.547786 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-11 01:06:37.547790 | orchestrator | Wednesday 11 March 2026 01:04:35 +0000 (0:00:00.295) 0:00:36.315 ******* 2026-03-11 01:06:37.547793 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:06:37.547796 | orchestrator | 2026-03-11 01:06:37.547799 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2026-03-11 01:06:37.547802 | orchestrator | Wednesday 11 March 2026 01:04:36 +0000 (0:00:00.513) 0:00:36.829 ******* 2026-03-11 01:06:37.547809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:06:37.547817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:06:37.547827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:06:37.547830 | orchestrator | 2026-03-11 01:06:37.547834 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2026-03-11 01:06:37.547837 | orchestrator | Wednesday 11 March 2026 01:04:39 +0000 (0:00:03.816) 0:00:40.645 ******* 2026-03-11 01:06:37.547843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-11 01:06:37.547850 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:37.547855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-11 01:06:37.547861 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:37.547870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-11 01:06:37.547876 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:37.547880 | orchestrator | 2026-03-11 
01:06:37.547883 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2026-03-11 01:06:37.547886 | orchestrator | Wednesday 11 March 2026 01:04:44 +0000 (0:00:04.242) 0:00:44.888 ******* 2026-03-11 01:06:37.547889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-11 01:06:37.547893 | orchestrator | 
skipping: [testbed-node-0] 2026-03-11 01:06:37.547898 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-11 01:06:37.547901 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:37.547907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': 
True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2026-03-11 01:06:37.547912 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:37.547916 | orchestrator | 2026-03-11 01:06:37.547919 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2026-03-11 01:06:37.547922 | orchestrator | Wednesday 11 March 2026 01:04:47 +0000 (0:00:03.435) 0:00:48.324 ******* 2026-03-11 01:06:37.547925 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:37.547928 | 
orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:37.547931 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:37.547934 | orchestrator | 2026-03-11 01:06:37.547937 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2026-03-11 01:06:37.547940 | orchestrator | Wednesday 11 March 2026 01:04:50 +0000 (0:00:03.166) 0:00:51.490 ******* 2026-03-11 01:06:37.547947 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 
check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:06:37.547954 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:06:37.548054 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 
'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:06:37.548059 | orchestrator | 2026-03-11 01:06:37.548062 | orchestrator | TASK [glance : Copying over glance-api.conf] *********************************** 2026-03-11 01:06:37.548065 | orchestrator | Wednesday 11 March 2026 01:04:55 +0000 (0:00:04.509) 0:00:56.000 ******* 2026-03-11 01:06:37.548069 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:37.548072 | orchestrator | changed: [testbed-node-1] 2026-03-11 
01:06:37.548076 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:06:37.548079 | orchestrator | 2026-03-11 01:06:37.548082 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ****************** 2026-03-11 01:06:37.548085 | orchestrator | Wednesday 11 March 2026 01:05:00 +0000 (0:00:05.050) 0:01:01.050 ******* 2026-03-11 01:06:37.548088 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:37.548094 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:37.548097 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:37.548100 | orchestrator | 2026-03-11 01:06:37.548103 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ****************** 2026-03-11 01:06:37.548106 | orchestrator | Wednesday 11 March 2026 01:05:04 +0000 (0:00:03.752) 0:01:04.803 ******* 2026-03-11 01:06:37.548109 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:37.548113 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:37.548116 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:37.548119 | orchestrator | 2026-03-11 01:06:37.548122 | orchestrator | TASK [glance : Copying over glance-image-import.conf] ************************** 2026-03-11 01:06:37.548125 | orchestrator | Wednesday 11 March 2026 01:05:07 +0000 (0:00:03.027) 0:01:07.831 ******* 2026-03-11 01:06:37.548128 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:37.548134 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:37.548137 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:37.548141 | orchestrator | 2026-03-11 01:06:37.548144 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] ******************* 2026-03-11 01:06:37.548147 | orchestrator | Wednesday 11 March 2026 01:05:11 +0000 (0:00:04.324) 0:01:12.155 ******* 2026-03-11 01:06:37.548150 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:37.548153 | orchestrator | skipping: [testbed-node-1] 2026-03-11 
01:06:37.548156 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:37.548159 | orchestrator | 2026-03-11 01:06:37.548162 | orchestrator | TASK [glance : Copying over existing policy file] ****************************** 2026-03-11 01:06:37.548166 | orchestrator | Wednesday 11 March 2026 01:05:14 +0000 (0:00:03.438) 0:01:15.594 ******* 2026-03-11 01:06:37.548169 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:37.548172 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:37.548175 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:37.548178 | orchestrator | 2026-03-11 01:06:37.548181 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] **************************** 2026-03-11 01:06:37.548184 | orchestrator | Wednesday 11 March 2026 01:05:15 +0000 (0:00:00.276) 0:01:15.871 ******* 2026-03-11 01:06:37.548187 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-11 01:06:37.548191 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:37.548194 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-11 01:06:37.548197 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:37.548200 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)  2026-03-11 01:06:37.548203 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:37.548207 | orchestrator | 2026-03-11 01:06:37.548210 | orchestrator | TASK [glance : Generating 'hostnqn' file for glance_api] *********************** 2026-03-11 01:06:37.548213 | orchestrator | Wednesday 11 March 2026 01:05:18 +0000 (0:00:03.745) 0:01:19.616 ******* 2026-03-11 01:06:37.548216 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:06:37.548219 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:37.548222 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:06:37.548225 | 
orchestrator | 2026-03-11 01:06:37.548229 | orchestrator | TASK [glance : Check glance containers] **************************************** 2026-03-11 01:06:37.548232 | orchestrator | Wednesday 11 March 2026 01:05:23 +0000 (0:00:04.198) 0:01:23.815 ******* 2026-03-11 01:06:37.548237 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:06:37.548247 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:06:37.548251 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': 
'', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', '', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2026-03-11 01:06:37.548257 | orchestrator | 2026-03-11 01:06:37.548261 | orchestrator | TASK [glance : include_tasks] ************************************************** 2026-03-11 01:06:37.548265 | orchestrator | Wednesday 11 March 2026 01:05:26 +0000 (0:00:03.583) 0:01:27.399 ******* 2026-03-11 01:06:37.548268 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:37.548271 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:37.548274 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:37.548277 | orchestrator | 2026-03-11 01:06:37.548280 | orchestrator | TASK 
[glance : Creating Glance database] *************************************** 2026-03-11 01:06:37.548283 | orchestrator | Wednesday 11 March 2026 01:05:26 +0000 (0:00:00.265) 0:01:27.665 ******* 2026-03-11 01:06:37.548286 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:37.548289 | orchestrator | 2026-03-11 01:06:37.548292 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2026-03-11 01:06:37.548296 | orchestrator | Wednesday 11 March 2026 01:05:28 +0000 (0:00:01.981) 0:01:29.646 ******* 2026-03-11 01:06:37.548299 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:37.548302 | orchestrator | 2026-03-11 01:06:37.548305 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2026-03-11 01:06:37.548308 | orchestrator | Wednesday 11 March 2026 01:05:31 +0000 (0:00:02.401) 0:01:32.048 ******* 2026-03-11 01:06:37.548311 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:37.548314 | orchestrator | 2026-03-11 01:06:37.548317 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2026-03-11 01:06:37.548320 | orchestrator | Wednesday 11 March 2026 01:05:33 +0000 (0:00:02.314) 0:01:34.362 ******* 2026-03-11 01:06:37.548323 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:37.548326 | orchestrator | 2026-03-11 01:06:37.548329 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2026-03-11 01:06:37.548333 | orchestrator | Wednesday 11 March 2026 01:06:02 +0000 (0:00:29.202) 0:02:03.564 ******* 2026-03-11 01:06:37.548336 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:37.548339 | orchestrator | 2026-03-11 01:06:37.548342 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-11 01:06:37.548345 | orchestrator | Wednesday 11 March 2026 01:06:05 +0000 (0:00:02.305) 0:02:05.870 ******* 2026-03-11 01:06:37.548348 
| orchestrator | 2026-03-11 01:06:37.548353 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-11 01:06:37.548357 | orchestrator | Wednesday 11 March 2026 01:06:05 +0000 (0:00:00.060) 0:02:05.930 ******* 2026-03-11 01:06:37.548360 | orchestrator | 2026-03-11 01:06:37.548363 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2026-03-11 01:06:37.548366 | orchestrator | Wednesday 11 March 2026 01:06:05 +0000 (0:00:00.067) 0:02:05.997 ******* 2026-03-11 01:06:37.548369 | orchestrator | 2026-03-11 01:06:37.548372 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2026-03-11 01:06:37.548375 | orchestrator | Wednesday 11 March 2026 01:06:05 +0000 (0:00:00.071) 0:02:06.069 ******* 2026-03-11 01:06:37.548379 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:37.548382 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:06:37.548385 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:06:37.548388 | orchestrator | 2026-03-11 01:06:37.548391 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:06:37.548395 | orchestrator | testbed-node-0 : ok=27  changed=19  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2026-03-11 01:06:37.548399 | orchestrator | testbed-node-1 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-11 01:06:37.548404 | orchestrator | testbed-node-2 : ok=16  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2026-03-11 01:06:37.548407 | orchestrator | 2026-03-11 01:06:37.548411 | orchestrator | 2026-03-11 01:06:37.548414 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:06:37.548417 | orchestrator | Wednesday 11 March 2026 01:06:34 +0000 (0:00:28.774) 0:02:34.844 ******* 2026-03-11 01:06:37.548420 | orchestrator | 
=============================================================================== 2026-03-11 01:06:37.548423 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 29.20s 2026-03-11 01:06:37.548426 | orchestrator | glance : Restart glance-api container ---------------------------------- 28.77s 2026-03-11 01:06:37.548429 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.11s 2026-03-11 01:06:37.548433 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.05s 2026-03-11 01:06:37.548436 | orchestrator | glance : Copying over config.json files for services -------------------- 4.51s 2026-03-11 01:06:37.548439 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 4.32s 2026-03-11 01:06:37.548442 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.30s 2026-03-11 01:06:37.548445 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 4.24s 2026-03-11 01:06:37.548448 | orchestrator | glance : Generating 'hostnqn' file for glance_api ----------------------- 4.20s 2026-03-11 01:06:37.548451 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.82s 2026-03-11 01:06:37.548454 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.75s 2026-03-11 01:06:37.548458 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.75s 2026-03-11 01:06:37.548461 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.63s 2026-03-11 01:06:37.548464 | orchestrator | service-ks-register : glance | Creating users --------------------------- 3.62s 2026-03-11 01:06:37.548467 | orchestrator | glance : Check glance containers ---------------------------------------- 3.58s 2026-03-11 01:06:37.548471 | orchestrator | 
service-ks-register : glance | Creating services ------------------------ 3.48s 2026-03-11 01:06:37.548474 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.44s 2026-03-11 01:06:37.548478 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 3.44s 2026-03-11 01:06:37.548481 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 3.43s 2026-03-11 01:06:37.548484 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.17s 2026-03-11 01:06:37.548487 | orchestrator | 2026-03-11 01:06:37 | INFO  | Task 4b0d71c2-62d5-432a-9e55-dabb4f4f50fe is in state STARTED 2026-03-11 01:06:37.548490 | orchestrator | 2026-03-11 01:06:37 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:40.591433 | orchestrator | 2026-03-11 01:06:40 | INFO  | Task e81cada9-9179-4dab-ad0c-48c412f36b1b is in state STARTED 2026-03-11 01:06:40.592072 | orchestrator | 2026-03-11 01:06:40 | INFO  | Task b211ca60-6d1d-4e81-af3c-431b9bc034d2 is in state STARTED 2026-03-11 01:06:40.592562 | orchestrator | 2026-03-11 01:06:40 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state STARTED 2026-03-11 01:06:40.593497 | orchestrator | 2026-03-11 01:06:40 | INFO  | Task 4b0d71c2-62d5-432a-9e55-dabb4f4f50fe is in state STARTED 2026-03-11 01:06:40.593523 | orchestrator | 2026-03-11 01:06:40 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:43.642064 | orchestrator | 2026-03-11 01:06:43 | INFO  | Task e81cada9-9179-4dab-ad0c-48c412f36b1b is in state STARTED 2026-03-11 01:06:43.643634 | orchestrator | 2026-03-11 01:06:43 | INFO  | Task b211ca60-6d1d-4e81-af3c-431b9bc034d2 is in state STARTED 2026-03-11 01:06:43.645200 | orchestrator | 2026-03-11 01:06:43 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state STARTED 2026-03-11 01:06:43.646489 | orchestrator | 2026-03-11 01:06:43 | INFO  | Task 
4b0d71c2-62d5-432a-9e55-dabb4f4f50fe is in state STARTED 2026-03-11 01:06:43.646524 | orchestrator | 2026-03-11 01:06:43 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:46.684828 | orchestrator | 2026-03-11 01:06:46 | INFO  | Task e81cada9-9179-4dab-ad0c-48c412f36b1b is in state STARTED 2026-03-11 01:06:46.687375 | orchestrator | 2026-03-11 01:06:46 | INFO  | Task b211ca60-6d1d-4e81-af3c-431b9bc034d2 is in state STARTED 2026-03-11 01:06:46.688615 | orchestrator | 2026-03-11 01:06:46 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state STARTED 2026-03-11 01:06:46.690693 | orchestrator | 2026-03-11 01:06:46 | INFO  | Task 4b0d71c2-62d5-432a-9e55-dabb4f4f50fe is in state STARTED 2026-03-11 01:06:46.690766 | orchestrator | 2026-03-11 01:06:46 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:49.734172 | orchestrator | 2026-03-11 01:06:49 | INFO  | Task e81cada9-9179-4dab-ad0c-48c412f36b1b is in state STARTED 2026-03-11 01:06:49.735563 | orchestrator | 2026-03-11 01:06:49 | INFO  | Task b211ca60-6d1d-4e81-af3c-431b9bc034d2 is in state STARTED 2026-03-11 01:06:49.736653 | orchestrator | 2026-03-11 01:06:49 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state STARTED 2026-03-11 01:06:49.738065 | orchestrator | 2026-03-11 01:06:49 | INFO  | Task 4b0d71c2-62d5-432a-9e55-dabb4f4f50fe is in state STARTED 2026-03-11 01:06:49.738112 | orchestrator | 2026-03-11 01:06:49 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:52.777977 | orchestrator | 2026-03-11 01:06:52 | INFO  | Task e81cada9-9179-4dab-ad0c-48c412f36b1b is in state STARTED 2026-03-11 01:06:52.779359 | orchestrator | 2026-03-11 01:06:52 | INFO  | Task b211ca60-6d1d-4e81-af3c-431b9bc034d2 is in state STARTED 2026-03-11 01:06:52.781038 | orchestrator | 2026-03-11 01:06:52 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state STARTED 2026-03-11 01:06:52.782705 | orchestrator | 2026-03-11 01:06:52 | INFO  | Task 
4b0d71c2-62d5-432a-9e55-dabb4f4f50fe is in state STARTED 2026-03-11 01:06:52.783331 | orchestrator | 2026-03-11 01:06:52 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:55.813151 | orchestrator | 2026-03-11 01:06:55 | INFO  | Task e81cada9-9179-4dab-ad0c-48c412f36b1b is in state SUCCESS 2026-03-11 01:06:55.814042 | orchestrator | 2026-03-11 01:06:55.814081 | orchestrator | 2026-03-11 01:06:55.814090 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:06:55.814098 | orchestrator | 2026-03-11 01:06:55.814104 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:06:55.814111 | orchestrator | Wednesday 11 March 2026 01:04:12 +0000 (0:00:00.231) 0:00:00.231 ******* 2026-03-11 01:06:55.814116 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:06:55.814123 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:06:55.814128 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:06:55.814134 | orchestrator | 2026-03-11 01:06:55.814139 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 01:06:55.814145 | orchestrator | Wednesday 11 March 2026 01:04:12 +0000 (0:00:00.229) 0:00:00.460 ******* 2026-03-11 01:06:55.814161 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2026-03-11 01:06:55.814167 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2026-03-11 01:06:55.814173 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2026-03-11 01:06:55.814179 | orchestrator | 2026-03-11 01:06:55.814185 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2026-03-11 01:06:55.814191 | orchestrator | 2026-03-11 01:06:55.814197 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-11 01:06:55.814217 | orchestrator | Wednesday 11 March 2026 01:04:12 +0000 
(0:00:00.361) 0:00:00.822 ******* 2026-03-11 01:06:55.814223 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:06:55.814229 | orchestrator | 2026-03-11 01:06:55.814234 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2026-03-11 01:06:55.814240 | orchestrator | Wednesday 11 March 2026 01:04:13 +0000 (0:00:00.539) 0:00:01.361 ******* 2026-03-11 01:06:55.814246 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2026-03-11 01:06:55.814252 | orchestrator | 2026-03-11 01:06:55.814258 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2026-03-11 01:06:55.814263 | orchestrator | Wednesday 11 March 2026 01:04:16 +0000 (0:00:03.116) 0:00:04.477 ******* 2026-03-11 01:06:55.814332 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2026-03-11 01:06:55.814339 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2026-03-11 01:06:55.814345 | orchestrator | 2026-03-11 01:06:55.814351 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2026-03-11 01:06:55.814644 | orchestrator | Wednesday 11 March 2026 01:04:23 +0000 (0:00:07.144) 0:00:11.622 ******* 2026-03-11 01:06:55.814657 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-11 01:06:55.814663 | orchestrator | 2026-03-11 01:06:55.814669 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2026-03-11 01:06:55.814674 | orchestrator | Wednesday 11 March 2026 01:04:26 +0000 (0:00:03.401) 0:00:15.023 ******* 2026-03-11 01:06:55.814679 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2026-03-11 01:06:55.814685 | orchestrator | [WARNING]: Module did not set no_log 
for update_password 2026-03-11 01:06:55.814691 | orchestrator | 2026-03-11 01:06:55.814697 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2026-03-11 01:06:55.814702 | orchestrator | Wednesday 11 March 2026 01:04:30 +0000 (0:00:03.695) 0:00:18.719 ******* 2026-03-11 01:06:55.814708 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-11 01:06:55.814713 | orchestrator | 2026-03-11 01:06:55.814718 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2026-03-11 01:06:55.814724 | orchestrator | Wednesday 11 March 2026 01:04:33 +0000 (0:00:03.163) 0:00:21.882 ******* 2026-03-11 01:06:55.814729 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2026-03-11 01:06:55.814734 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2026-03-11 01:06:55.814740 | orchestrator | 2026-03-11 01:06:55.814745 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2026-03-11 01:06:55.814751 | orchestrator | Wednesday 11 March 2026 01:04:40 +0000 (0:00:06.746) 0:00:28.628 ******* 2026-03-11 01:06:55.814758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:06:55.814786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:06:55.814805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:06:55.814812 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.814819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.814825 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.814831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.814853 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.814862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.814868 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.814874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.814880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.814885 | orchestrator | 2026-03-11 01:06:55.814891 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-11 01:06:55.814896 | orchestrator | Wednesday 11 March 2026 01:04:43 +0000 (0:00:02.650) 0:00:31.279 ******* 2026-03-11 01:06:55.814906 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:55.814911 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:55.814917 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:55.814922 | orchestrator | 2026-03-11 01:06:55.814928 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-11 01:06:55.814945 | orchestrator | Wednesday 11 March 2026 01:04:43 +0000 (0:00:00.524) 0:00:31.803 ******* 2026-03-11 01:06:55.814950 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:06:55.814955 | orchestrator | 2026-03-11 01:06:55.814960 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2026-03-11 01:06:55.814966 | orchestrator | Wednesday 11 March 2026 01:04:44 +0000 (0:00:00.619) 0:00:32.422 ******* 2026-03-11 01:06:55.814985 | orchestrator | changed: [testbed-node-0] => (item=cinder-volume) 2026-03-11 01:06:55.814991 | orchestrator | changed: [testbed-node-2] => (item=cinder-volume) 2026-03-11 01:06:55.814997 | orchestrator | changed: [testbed-node-1] => (item=cinder-volume) 2026-03-11 01:06:55.815003 | orchestrator | changed: [testbed-node-0] => (item=cinder-backup) 2026-03-11 01:06:55.815008 | orchestrator | changed: [testbed-node-2] => (item=cinder-backup) 2026-03-11 01:06:55.815013 | orchestrator | changed: [testbed-node-1] 
=> (item=cinder-backup) 2026-03-11 01:06:55.815054 | orchestrator | 2026-03-11 01:06:55.815062 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2026-03-11 01:06:55.815068 | orchestrator | Wednesday 11 March 2026 01:04:46 +0000 (0:00:01.921) 0:00:34.344 ******* 2026-03-11 01:06:55.815077 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-11 01:06:55.815084 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-11 01:06:55.815090 | 
orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-11 01:06:55.815100 | orchestrator | skipping: [testbed-node-1] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-11 01:06:55.815121 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-11 01:06:55.815130 | orchestrator | skipping: [testbed-node-2] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2026-03-11 01:06:55.815136 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-11 01:06:55.815141 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-11 01:06:55.815150 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-11 01:06:55.815171 | orchestrator | changed: [testbed-node-2] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-11 01:06:55.815180 | orchestrator | changed: [testbed-node-1] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-11 01:06:55.815186 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2026-03-11 01:06:55.815191 | orchestrator | 2026-03-11 01:06:55.815196 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2026-03-11 01:06:55.815202 | orchestrator | Wednesday 11 March 2026 01:04:49 +0000 (0:00:03.635) 0:00:37.979 ******* 2026-03-11 01:06:55.815207 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-11 01:06:55.815213 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-11 01:06:55.815218 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2026-03-11 01:06:55.815224 | orchestrator | 2026-03-11 01:06:55.815229 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2026-03-11 01:06:55.815235 | orchestrator | Wednesday 11 March 2026 01:04:51 +0000 (0:00:01.962) 0:00:39.942 ******* 2026-03-11 01:06:55.815240 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder.keyring) 2026-03-11 01:06:55.815249 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder.keyring) 2026-03-11 01:06:55.815254 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder.keyring) 2026-03-11 01:06:55.815259 | orchestrator | changed: [testbed-node-0] => (item=ceph.client.cinder-backup.keyring) 2026-03-11 01:06:55.815264 | orchestrator | changed: [testbed-node-1] => (item=ceph.client.cinder-backup.keyring) 2026-03-11 01:06:55.815270 | orchestrator | changed: [testbed-node-2] => (item=ceph.client.cinder-backup.keyring) 2026-03-11 01:06:55.815275 | orchestrator | 2026-03-11 01:06:55.815280 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2026-03-11 01:06:55.815285 | orchestrator | Wednesday 11 March 
2026 01:04:54 +0000 (0:00:03.137) 0:00:43.080 ******* 2026-03-11 01:06:55.815291 | orchestrator | ok: [testbed-node-0] => (item=cinder-volume) 2026-03-11 01:06:55.815296 | orchestrator | ok: [testbed-node-1] => (item=cinder-volume) 2026-03-11 01:06:55.815301 | orchestrator | ok: [testbed-node-2] => (item=cinder-volume) 2026-03-11 01:06:55.815307 | orchestrator | ok: [testbed-node-0] => (item=cinder-backup) 2026-03-11 01:06:55.815313 | orchestrator | ok: [testbed-node-1] => (item=cinder-backup) 2026-03-11 01:06:55.815318 | orchestrator | ok: [testbed-node-2] => (item=cinder-backup) 2026-03-11 01:06:55.815324 | orchestrator | 2026-03-11 01:06:55.815329 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2026-03-11 01:06:55.815334 | orchestrator | Wednesday 11 March 2026 01:04:55 +0000 (0:00:00.959) 0:00:44.039 ******* 2026-03-11 01:06:55.815339 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:55.815344 | orchestrator | 2026-03-11 01:06:55.815349 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2026-03-11 01:06:55.815355 | orchestrator | Wednesday 11 March 2026 01:04:55 +0000 (0:00:00.088) 0:00:44.128 ******* 2026-03-11 01:06:55.815360 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:55.815366 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:55.815371 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:55.815377 | orchestrator | 2026-03-11 01:06:55.815382 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2026-03-11 01:06:55.815388 | orchestrator | Wednesday 11 March 2026 01:04:56 +0000 (0:00:00.249) 0:00:44.378 ******* 2026-03-11 01:06:55.815394 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:06:55.815399 | orchestrator | 2026-03-11 01:06:55.815405 | orchestrator | TASK [service-cert-copy : cinder 
| Copying over extra CA certificates] ********* 2026-03-11 01:06:55.815432 | orchestrator | Wednesday 11 March 2026 01:04:56 +0000 (0:00:00.608) 0:00:44.986 ******* 2026-03-11 01:06:55.815443 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:06:55.815451 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 
2026-03-11 01:06:55.815464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:06:55.815469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.815475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.815486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.815494 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.815501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.815511 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.815517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.815523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.815534 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.815541 | orchestrator | 2026-03-11 01:06:55.815549 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2026-03-11 01:06:55.815555 | orchestrator | Wednesday 11 March 2026 01:05:00 +0000 (0:00:03.813) 0:00:48.800 ******* 2026-03-11 01:06:55.815561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 01:06:55.815570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.815577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.815583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 
'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.815589 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:55.815599 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 01:06:55.815607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.815617 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.815622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.815628 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:55.815634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 01:06:55.815640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.815708 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.815718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.815728 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:55.815734 | orchestrator | 2026-03-11 01:06:55.815740 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2026-03-11 01:06:55.815746 | orchestrator | Wednesday 11 March 2026 01:05:01 +0000 (0:00:00.874) 0:00:49.674 ******* 2026-03-11 01:06:55.815752 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 01:06:55.815758 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.815764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.815773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.815779 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:55.815787 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 01:06:55.815797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.815803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.815809 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.815815 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:55.815821 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 
'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 01:06:55.815830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.815843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.815849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.815855 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:55.815861 | orchestrator | 2026-03-11 01:06:55.815867 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2026-03-11 01:06:55.815872 | orchestrator | Wednesday 11 March 2026 01:05:02 +0000 (0:00:01.413) 0:00:51.088 ******* 2026-03-11 01:06:55.815878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:06:55.815884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:06:55.815895 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:06:55.815905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.815911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.815917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.815923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.815929 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.815979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.815986 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.815992 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816046 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816059 | orchestrator | 2026-03-11 01:06:55.816065 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2026-03-11 
01:06:55.816071 | orchestrator | Wednesday 11 March 2026 01:05:06 +0000 (0:00:03.872) 0:00:54.961 ******* 2026-03-11 01:06:55.816077 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-11 01:06:55.816083 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-11 01:06:55.816088 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2026-03-11 01:06:55.816094 | orchestrator | 2026-03-11 01:06:55.816099 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2026-03-11 01:06:55.816105 | orchestrator | Wednesday 11 March 2026 01:05:08 +0000 (0:00:01.726) 0:00:56.688 ******* 2026-03-11 01:06:55.816114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:06:55.816126 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:06:55.816132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:06:55.816138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816163 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816172 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816205 | orchestrator | 2026-03-11 01:06:55.816211 | orchestrator | TASK [cinder : Generating 
'hostnqn' file for cinder_volume] ******************** 2026-03-11 01:06:55.816216 | orchestrator | Wednesday 11 March 2026 01:05:21 +0000 (0:00:13.274) 0:01:09.963 ******* 2026-03-11 01:06:55.816222 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:55.816229 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:06:55.816234 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:06:55.816240 | orchestrator | 2026-03-11 01:06:55.816246 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2026-03-11 01:06:55.816254 | orchestrator | Wednesday 11 March 2026 01:05:23 +0000 (0:00:01.468) 0:01:11.432 ******* 2026-03-11 01:06:55.816262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 01:06:55.816269 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.816275 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.816281 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.816290 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:55.816297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 01:06:55.816306 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.816322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.816329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.816335 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:55.816341 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2026-03-11 01:06:55.816352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.816358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.816371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2026-03-11 01:06:55.816379 | orchestrator | skipping: 
[testbed-node-1] 2026-03-11 01:06:55.816386 | orchestrator | 2026-03-11 01:06:55.816393 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2026-03-11 01:06:55.816400 | orchestrator | Wednesday 11 March 2026 01:05:24 +0000 (0:00:00.877) 0:01:12.310 ******* 2026-03-11 01:06:55.816406 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:55.816413 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:55.816419 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:55.816426 | orchestrator | 2026-03-11 01:06:55.816432 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2026-03-11 01:06:55.816439 | orchestrator | Wednesday 11 March 2026 01:05:24 +0000 (0:00:00.282) 0:01:12.592 ******* 2026-03-11 01:06:55.816445 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:06:55.816453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:06:55.816464 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2026-03-11 01:06:55.816475 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816485 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816493 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816511 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816527 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 
'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816538 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2026-03-11 01:06:55.816552 | orchestrator | 2026-03-11 01:06:55.816558 | orchestrator | TASK [cinder : include_tasks] 
************************************************** 2026-03-11 01:06:55.816565 | orchestrator | Wednesday 11 March 2026 01:05:27 +0000 (0:00:03.175) 0:01:15.768 ******* 2026-03-11 01:06:55.816572 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:55.816582 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:06:55.816588 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:06:55.816594 | orchestrator | 2026-03-11 01:06:55.816601 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2026-03-11 01:06:55.816607 | orchestrator | Wednesday 11 March 2026 01:05:27 +0000 (0:00:00.384) 0:01:16.152 ******* 2026-03-11 01:06:55.816617 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:55.816628 | orchestrator | 2026-03-11 01:06:55.816640 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2026-03-11 01:06:55.816652 | orchestrator | Wednesday 11 March 2026 01:05:29 +0000 (0:00:01.896) 0:01:18.048 ******* 2026-03-11 01:06:55.816664 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:55.816671 | orchestrator | 2026-03-11 01:06:55.816677 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2026-03-11 01:06:55.816683 | orchestrator | Wednesday 11 March 2026 01:05:32 +0000 (0:00:02.424) 0:01:20.473 ******* 2026-03-11 01:06:55.816689 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:55.816696 | orchestrator | 2026-03-11 01:06:55.816703 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-11 01:06:55.816710 | orchestrator | Wednesday 11 March 2026 01:05:54 +0000 (0:00:22.624) 0:01:43.097 ******* 2026-03-11 01:06:55.816716 | orchestrator | 2026-03-11 01:06:55.816723 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-11 01:06:55.816730 | orchestrator | Wednesday 11 March 2026 01:05:55 +0000 
(0:00:00.068) 0:01:43.165 ******* 2026-03-11 01:06:55.816736 | orchestrator | 2026-03-11 01:06:55.816743 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2026-03-11 01:06:55.816750 | orchestrator | Wednesday 11 March 2026 01:05:55 +0000 (0:00:00.064) 0:01:43.230 ******* 2026-03-11 01:06:55.816755 | orchestrator | 2026-03-11 01:06:55.816761 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2026-03-11 01:06:55.816767 | orchestrator | Wednesday 11 March 2026 01:05:55 +0000 (0:00:00.066) 0:01:43.296 ******* 2026-03-11 01:06:55.816773 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:55.816778 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:06:55.816784 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:06:55.816790 | orchestrator | 2026-03-11 01:06:55.816796 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2026-03-11 01:06:55.816855 | orchestrator | Wednesday 11 March 2026 01:06:18 +0000 (0:00:22.933) 0:02:06.230 ******* 2026-03-11 01:06:55.816861 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:55.816911 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:06:55.816919 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:06:55.816925 | orchestrator | 2026-03-11 01:06:55.816941 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2026-03-11 01:06:55.816947 | orchestrator | Wednesday 11 March 2026 01:06:28 +0000 (0:00:10.109) 0:02:16.340 ******* 2026-03-11 01:06:55.816953 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:55.816958 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:06:55.816963 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:06:55.816969 | orchestrator | 2026-03-11 01:06:55.816974 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2026-03-11 
01:06:55.816980 | orchestrator | Wednesday 11 March 2026 01:06:44 +0000 (0:00:16.558) 0:02:32.898 ******* 2026-03-11 01:06:55.816985 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:06:55.816991 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:06:55.816997 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:06:55.817002 | orchestrator | 2026-03-11 01:06:55.817008 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2026-03-11 01:06:55.817018 | orchestrator | Wednesday 11 March 2026 01:06:53 +0000 (0:00:08.578) 0:02:41.477 ******* 2026-03-11 01:06:55.817024 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:06:55.817030 | orchestrator | 2026-03-11 01:06:55.817035 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:06:55.817047 | orchestrator | testbed-node-0 : ok=30  changed=22  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2026-03-11 01:06:55.817053 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-11 01:06:55.817063 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-11 01:06:55.817070 | orchestrator | 2026-03-11 01:06:55.817075 | orchestrator | 2026-03-11 01:06:55.817081 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:06:55.817087 | orchestrator | Wednesday 11 March 2026 01:06:53 +0000 (0:00:00.238) 0:02:41.716 ******* 2026-03-11 01:06:55.817093 | orchestrator | =============================================================================== 2026-03-11 01:06:55.817099 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 22.93s 2026-03-11 01:06:55.817105 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 22.62s 2026-03-11 01:06:55.817110 | orchestrator | cinder 
: Restart cinder-volume container ------------------------------- 16.56s 2026-03-11 01:06:55.817115 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 13.27s 2026-03-11 01:06:55.817120 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 10.11s 2026-03-11 01:06:55.817125 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 8.58s 2026-03-11 01:06:55.817130 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 7.14s 2026-03-11 01:06:55.817135 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 6.75s 2026-03-11 01:06:55.817141 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.87s 2026-03-11 01:06:55.817147 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 3.81s 2026-03-11 01:06:55.817152 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.70s 2026-03-11 01:06:55.817158 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 3.64s 2026-03-11 01:06:55.817163 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.40s 2026-03-11 01:06:55.817169 | orchestrator | cinder : Check cinder containers ---------------------------------------- 3.18s 2026-03-11 01:06:55.817174 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.16s 2026-03-11 01:06:55.817180 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.14s 2026-03-11 01:06:55.817185 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.12s 2026-03-11 01:06:55.817190 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.65s 2026-03-11 01:06:55.817195 | orchestrator | cinder : Creating 
Cinder database user and setting permissions ---------- 2.42s 2026-03-11 01:06:55.817200 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 1.96s 2026-03-11 01:06:55.817205 | orchestrator | 2026-03-11 01:06:55 | INFO  | Task b211ca60-6d1d-4e81-af3c-431b9bc034d2 is in state STARTED 2026-03-11 01:06:55.817210 | orchestrator | 2026-03-11 01:06:55 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state STARTED 2026-03-11 01:06:55.817215 | orchestrator | 2026-03-11 01:06:55 | INFO  | Task 4b0d71c2-62d5-432a-9e55-dabb4f4f50fe is in state STARTED 2026-03-11 01:06:55.817220 | orchestrator | 2026-03-11 01:06:55 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:06:58.860209 | orchestrator | 2026-03-11 01:06:58 | INFO  | Task b211ca60-6d1d-4e81-af3c-431b9bc034d2 is in state STARTED 2026-03-11 01:06:58.862978 | orchestrator | 2026-03-11 01:06:58 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state STARTED 2026-03-11 01:06:58.864098 | orchestrator | 2026-03-11 01:06:58 | INFO  | Task 4b0d71c2-62d5-432a-9e55-dabb4f4f50fe is in state STARTED 2026-03-11 01:06:58.864991 | orchestrator | 2026-03-11 01:06:58 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:01.905891 | orchestrator | 2026-03-11 01:07:01 | INFO  | Task b211ca60-6d1d-4e81-af3c-431b9bc034d2 is in state STARTED 2026-03-11 01:07:01.907603 | orchestrator | 2026-03-11 01:07:01 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state STARTED 2026-03-11 01:07:01.910034 | orchestrator | 2026-03-11 01:07:01 | INFO  | Task 4b0d71c2-62d5-432a-9e55-dabb4f4f50fe is in state STARTED 2026-03-11 01:07:01.910093 | orchestrator | 2026-03-11 01:07:01 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:04.956724 | orchestrator | 2026-03-11 01:07:04 | INFO  | Task b211ca60-6d1d-4e81-af3c-431b9bc034d2 is in state STARTED 2026-03-11 01:07:04.958626 | orchestrator | 2026-03-11 01:07:04 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 
is in state STARTED 2026-03-11 01:07:04.960094 | orchestrator | 2026-03-11 01:07:04 | INFO  | Task 4b0d71c2-62d5-432a-9e55-dabb4f4f50fe is in state STARTED 2026-03-11 01:07:04.960137 | orchestrator | 2026-03-11 01:07:04 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:08.016966 | orchestrator | 2026-03-11 01:07:08 | INFO  | Task b211ca60-6d1d-4e81-af3c-431b9bc034d2 is in state STARTED 2026-03-11 01:07:08.017060 | orchestrator | 2026-03-11 01:07:08 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state STARTED 2026-03-11 01:07:08.019194 | orchestrator | 2026-03-11 01:07:08 | INFO  | Task 4b0d71c2-62d5-432a-9e55-dabb4f4f50fe is in state SUCCESS 2026-03-11 01:07:08.022637 | orchestrator | 2026-03-11 01:07:08 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:07:08.024202 | orchestrator | 2026-03-11 01:07:08.024688 | orchestrator | 2026-03-11 01:07:08.024709 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:07:08.024721 | orchestrator | 2026-03-11 01:07:08.024732 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:07:08.024770 | orchestrator | Wednesday 11 March 2026 01:05:06 +0000 (0:00:00.238) 0:00:00.238 ******* 2026-03-11 01:07:08.024783 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:07:08.024795 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:07:08.024806 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:07:08.024817 | orchestrator | 2026-03-11 01:07:08.025331 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2026-03-11 01:07:08.025351 | orchestrator | Wednesday 11 March 2026 01:05:06 +0000 (0:00:00.295) 0:00:00.533 ******* 2026-03-11 01:07:08.025362 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2026-03-11 01:07:08.025374 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2026-03-11 01:07:08.025384 | orchestrator 
| ok: [testbed-node-2] => (item=enable_grafana_True) 2026-03-11 01:07:08.025395 | orchestrator | 2026-03-11 01:07:08.025406 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2026-03-11 01:07:08.025417 | orchestrator | 2026-03-11 01:07:08.025428 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-11 01:07:08.025438 | orchestrator | Wednesday 11 March 2026 01:05:07 +0000 (0:00:00.370) 0:00:00.904 ******* 2026-03-11 01:07:08.025450 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:07:08.025461 | orchestrator | 2026-03-11 01:07:08.025472 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2026-03-11 01:07:08.025483 | orchestrator | Wednesday 11 March 2026 01:05:07 +0000 (0:00:00.546) 0:00:01.451 ******* 2026-03-11 01:07:08.025497 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 01:07:08.025535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 01:07:08.025547 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 01:07:08.025559 | orchestrator | 2026-03-11 01:07:08.025571 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2026-03-11 01:07:08.025582 | orchestrator | Wednesday 11 March 2026 01:05:08 +0000 (0:00:00.884) 0:00:02.336 ******* 2026-03-11 01:07:08.025593 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2026-03-11 01:07:08.025604 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2026-03-11 01:07:08.025615 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-11 01:07:08.025626 | orchestrator | 2026-03-11 01:07:08.025637 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2026-03-11 01:07:08.025648 | orchestrator | Wednesday 11 March 2026 01:05:09 +0000 (0:00:01.309) 0:00:03.645 ******* 2026-03-11 01:07:08.025659 | orchestrator | included: 
/ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:07:08.025670 | orchestrator | 2026-03-11 01:07:08.025681 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2026-03-11 01:07:08.025691 | orchestrator | Wednesday 11 March 2026 01:05:11 +0000 (0:00:01.107) 0:00:04.753 ******* 2026-03-11 01:07:08.025760 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 01:07:08.025788 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 01:07:08.025821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 
'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 01:07:08.025840 | orchestrator | 2026-03-11 01:07:08.025858 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2026-03-11 01:07:08.025875 | orchestrator | Wednesday 11 March 2026 01:05:12 +0000 (0:00:01.434) 0:00:06.187 ******* 2026-03-11 01:07:08.025892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-11 01:07:08.025909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-11 01:07:08.025952 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:07:08.025970 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:07:08.026101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-11 01:07:08.026133 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:07:08.026151 | orchestrator | 2026-03-11 01:07:08.026169 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2026-03-11 01:07:08.026186 | orchestrator | Wednesday 11 March 2026 01:05:13 +0000 (0:00:00.533) 0:00:06.721 ******* 2026-03-11 01:07:08.026204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-11 01:07:08.026240 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:07:08.026260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-11 01:07:08.026280 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:07:08.026299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2026-03-11 01:07:08.026318 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:07:08.026330 | orchestrator | 2026-03-11 01:07:08.026341 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2026-03-11 01:07:08.026352 | orchestrator | Wednesday 11 March 2026 
01:05:14 +0000 (0:00:01.456) 0:00:08.177 ******* 2026-03-11 01:07:08.026363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 01:07:08.026378 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 01:07:08.026457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 01:07:08.026497 | orchestrator | 2026-03-11 01:07:08.026515 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2026-03-11 01:07:08.026534 | orchestrator | Wednesday 11 March 2026 01:05:15 +0000 (0:00:01.376) 0:00:09.553 ******* 2026-03-11 01:07:08.026551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 01:07:08.026571 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 01:07:08.026591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 01:07:08.026609 | orchestrator | 2026-03-11 01:07:08.026629 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2026-03-11 01:07:08.026646 | orchestrator | Wednesday 11 March 2026 01:05:17 +0000 (0:00:01.603) 0:00:11.157 ******* 2026-03-11 01:07:08.026665 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:07:08.026683 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:07:08.026703 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:07:08.026722 | orchestrator | 2026-03-11 01:07:08.026740 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2026-03-11 01:07:08.026763 | orchestrator | Wednesday 11 March 2026 01:05:18 +0000 (0:00:00.672) 0:00:11.829 ******* 2026-03-11 01:07:08.026790 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-11 01:07:08.026810 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-11 01:07:08.026828 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2026-03-11 01:07:08.026846 | orchestrator | 2026-03-11 01:07:08.026864 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2026-03-11 01:07:08.026882 | orchestrator | Wednesday 11 March 2026 01:05:19 +0000 
(0:00:01.409) 0:00:13.239 ******* 2026-03-11 01:07:08.026900 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-11 01:07:08.026957 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-11 01:07:08.026978 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2026-03-11 01:07:08.026999 | orchestrator | 2026-03-11 01:07:08.027018 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2026-03-11 01:07:08.027038 | orchestrator | Wednesday 11 March 2026 01:05:21 +0000 (0:00:01.522) 0:00:14.764 ******* 2026-03-11 01:07:08.027110 | orchestrator | ok: [testbed-node-0 -> localhost] 2026-03-11 01:07:08.027124 | orchestrator | 2026-03-11 01:07:08.027135 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2026-03-11 01:07:08.027146 | orchestrator | Wednesday 11 March 2026 01:05:21 +0000 (0:00:00.662) 0:00:15.427 ******* 2026-03-11 01:07:08.027157 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2026-03-11 01:07:08.027167 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2026-03-11 01:07:08.027179 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:07:08.027190 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:07:08.027201 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:07:08.027212 | orchestrator | 2026-03-11 01:07:08.027222 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2026-03-11 01:07:08.027233 | orchestrator | Wednesday 11 March 2026 01:05:22 +0000 (0:00:00.615) 0:00:16.042 ******* 2026-03-11 01:07:08.027244 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:07:08.027255 | orchestrator | skipping: 
[testbed-node-1] 2026-03-11 01:07:08.027266 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:07:08.027277 | orchestrator | 2026-03-11 01:07:08.027288 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2026-03-11 01:07:08.027299 | orchestrator | Wednesday 11 March 2026 01:05:22 +0000 (0:00:00.369) 0:00:16.411 ******* 2026-03-11 01:07:08.027311 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1103286, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0321105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1103286, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0321105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027337 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 
'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 121701, 'inode': 1103286, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0321105, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1103341, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0420182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1103341, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0420182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027411 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfsdashboard.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/cephfsdashboard.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 143913, 'inode': 1103341, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0420182, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027423 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1103405, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.057163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027434 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1103405, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.057163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 
'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26019, 'inode': 1103405, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.057163, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027457 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1103334, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.040018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027477 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1103334, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.040018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027515 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 
'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1103334, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.040018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1103410, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0584726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027540 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1103410, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0584726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027551 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 170293, 'inode': 1103410, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0584726, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027563 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1103303, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0361183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1103303, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0361183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2026-03-11 01:07:08.027619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-nvmeof-performance.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-nvmeof-performance.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 33297, 'inode': 1103303, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0361183, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1103368, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0460184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1103368, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0460184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26346, 'inode': 1103368, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0460184, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1103393, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0527384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1103393, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0527384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027722 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 46110, 'inode': 1103393, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0527384, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027735 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1103285, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0276487, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027746 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1103285, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0276487, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1103285, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0276487, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027769 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1103300, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.033041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027788 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1103300, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.033041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1103300, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.033041, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027835 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1103337, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.040018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027848 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1103337, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.040018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1103337, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.040018, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1103387, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0510185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.027890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19231, 'inode': 1103387, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0510185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2026-03-11 01:07:08.027902 | orchestrator | changed: [testbed-node-1] => (item=ceph/pool-detail.json: /operations/grafana/dashboards/ceph/pool-detail.json, mode 0644, root:root, 19231 bytes)
2026-03-11 01:07:08.027962 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=ceph/rbd-details.json: /operations/grafana/dashboards/ceph/rbd-details.json, mode 0644, root:root, 13320 bytes)
2026-03-11 01:07:08.028007 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=ceph/ceph_overview.json: /operations/grafana/dashboards/ceph/ceph_overview.json, mode 0644, root:root, 80386 bytes)
2026-03-11 01:07:08.028083 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=ceph/radosgw-detail.json: /operations/grafana/dashboards/ceph/radosgw-detail.json, mode 0644, root:root, 20042 bytes)
2026-03-11 01:07:08.028180 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=ceph/smb-overview.json: /operations/grafana/dashboards/ceph/smb-overview.json, mode 0644, root:root, 29877 bytes)
2026-03-11 01:07:08.028248 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=ceph/osds-overview.json: /operations/grafana/dashboards/ceph/osds-overview.json, mode 0644, root:root, 38375 bytes)
2026-03-11 01:07:08.028318 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=ceph/multi-cluster-overview.json: /operations/grafana/dashboards/ceph/multi-cluster-overview.json, mode 0644, root:root, 63043 bytes)
2026-03-11 01:07:08.028387 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=ceph/hosts-overview.json: /operations/grafana/dashboards/ceph/hosts-overview.json, mode 0644, root:root, 27387 bytes)
2026-03-11 01:07:08.028457 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=ceph/pool-overview.json: /operations/grafana/dashboards/ceph/pool-overview.json, mode 0644, root:root, 49016 bytes)
2026-03-11 01:07:08.028497 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=ceph/host-details.json: /operations/grafana/dashboards/ceph/host-details.json, mode 0644, root:root, 43303 bytes)
2026-03-11 01:07:08.028580 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=ceph/radosgw-sync-overview.json: /operations/grafana/dashboards/ceph/radosgw-sync-overview.json, mode 0644, root:root, 16614 bytes)
2026-03-11 01:07:08.028649 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=ceph/ceph-nvmeof.json: /operations/grafana/dashboards/ceph/ceph-nvmeof.json, mode 0644, root:root, 52667 bytes)
2026-03-11 01:07:08.028720 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=openstack/openstack.json: /operations/grafana/dashboards/openstack/openstack.json, mode 0644, root:root, 57270 bytes)
2026-03-11 01:07:08.028764 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=infrastructure/haproxy.json: /operations/grafana/dashboards/infrastructure/haproxy.json, mode 0644, root:root, 410814 bytes)
2026-03-11 01:07:08.028799 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=infrastructure/database.json: /operations/grafana/dashboards/infrastructure/database.json, mode 0644, root:root, 30898 bytes)
2026-03-11 01:07:08.028847 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=infrastructure/node-rsrc-use.json: /operations/grafana/dashboards/infrastructure/node-rsrc-use.json, mode 0644, root:root, 15767 bytes)
2026-03-11 01:07:08.028882 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=infrastructure/alertmanager-overview.json: /operations/grafana/dashboards/infrastructure/alertmanager-overview.json, mode 0644, root:root, 9645 bytes)
2026-03-11 01:07:08.028992 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=infrastructure/opensearch.json: /operations/grafana/dashboards/infrastructure/opensearch.json, mode 0644, root:root, 65458 bytes)
2026-03-11 01:07:08.029055 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=infrastructure/node_exporter_full.json: /operations/grafana/dashboards/infrastructure/node_exporter_full.json, mode 0644, root:root, 682774 bytes)
2026-03-11 01:07:08.029136 | orchestrator | changed: [testbed-node-2, testbed-node-0, testbed-node-1] => (item=infrastructure/prometheus-remote-write.json: /operations/grafana/dashboards/infrastructure/prometheus-remote-write.json, mode 0644, root:root, 22303 bytes)
2026-03-11 01:07:08.029198 | orchestrator | changed: [testbed-node-2, testbed-node-0] => (item=infrastructure/redfish.json: /operations/grafana/dashboards/infrastructure/redfish.json, mode 0644, root:root, 38087 bytes)
2026-03-11 01:07:08.029256 | orchestrator | changed: [testbed-node-2] => (item=infrastructure/nodes.json: /operations/grafana/dashboards/infrastructure/nodes.json, mode 0644, root:root, 21194 bytes)
2026-03-11 01:07:08.029267 | orchestrator | changed: [testbed-node-1] =>
(item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1103562, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1060193, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029279 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1103511, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0877872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029290 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1103461, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0740187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029306 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21194, 'inode': 1103511, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0877872, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1103461, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0740187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029335 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1103449, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0690186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 
'isgid': False}}) 2026-03-11 01:07:08.029347 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1103461, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0740187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029358 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1103449, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0690186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029370 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1103460, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0730188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029381 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1103460, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0730188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029399 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1103449, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0690186, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1103435, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0680187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': 
False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029428 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1103435, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0680187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029440 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1103460, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0730188, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029451 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1103463, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 
1773188252.0746593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1103435, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0680187, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029481 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1103463, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0746593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 
1103530, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1051662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029512 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1103530, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1051662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029523 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15957, 'inode': 1103463, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0746593, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029535 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1103526, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.092019, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029546 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1103530, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.1051662, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029569 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1103526, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.092019, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029582 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': 
'/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1103423, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.059407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029598 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1103526, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.092019, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1103423, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.059407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029636 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1103425, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0600185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029653 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1103423, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.059407, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1103425, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0600185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
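The stat items in the loop output above carry both an octal `mode` string and per-bit permission booleans (`rusr`, `wusr`, `xgrp`, `isgid`, ...). As a quick cross-check that the two representations agree, here is a minimal sketch (the function name `flags_to_mode` is hypothetical) that folds the booleans back into the octal string:

```python
def flags_to_mode(st: dict) -> str:
    """Fold Ansible stat-style permission booleans back into an octal mode string."""
    bits = 0
    # (flag, bit value) pairs, owner/group/other plus setuid/setgid
    for flag, value in [
        ("rusr", 0o400), ("wusr", 0o200), ("xusr", 0o100),
        ("rgrp", 0o040), ("wgrp", 0o020), ("xgrp", 0o010),
        ("roth", 0o004), ("woth", 0o002), ("xoth", 0o001),
        ("isuid", 0o4000), ("isgid", 0o2000),
    ]:
        if st.get(flag):
            bits |= value
    return format(bits, "04o")

# Every dashboard file above reports rusr/wusr/rgrp/roth set and all others clear,
# which matches the reported mode '0644':
print(flags_to_mode({"rusr": True, "wusr": True, "rgrp": True, "roth": True}))  # -> 0644
```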
2026-03-11 01:07:08.029711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1103492, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.087209, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1103492, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.087209, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029752 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1103425, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.0600185, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1103523, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.090019, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1103523, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.090019, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1103492, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 
1773188252.087209, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21951, 'inode': 1103523, 'dev': 81, 'nlink': 1, 'atime': 1773187348.0, 'mtime': 1773187348.0, 'ctime': 1773188252.090019, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2026-03-11 01:07:08.029829 | orchestrator | 2026-03-11 01:07:08.029840 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2026-03-11 01:07:08.029852 | orchestrator | Wednesday 11 March 2026 01:06:00 +0000 (0:00:37.797) 0:00:54.209 ******* 2026-03-11 01:07:08.029863 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 01:07:08.029875 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 
'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 01:07:08.029886 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2026-03-11 01:07:08.029898 | orchestrator | 2026-03-11 01:07:08.029909 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2026-03-11 01:07:08.029991 | orchestrator | Wednesday 11 March 2026 01:06:01 +0000 (0:00:00.911) 0:00:55.120 ******* 2026-03-11 01:07:08.030060 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:07:08.030084 | orchestrator | 2026-03-11 01:07:08.030102 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2026-03-11 01:07:08.030119 | orchestrator | Wednesday 11 March 2026 01:06:03 +0000 (0:00:01.962) 0:00:57.082 ******* 2026-03-11 01:07:08.030135 | orchestrator | changed: 
[testbed-node-0]
2026-03-11 01:07:08.030146 | orchestrator |
2026-03-11 01:07:08.030156 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-11 01:07:08.030165 | orchestrator | Wednesday 11 March 2026 01:06:05 +0000 (0:00:02.020) 0:00:59.103 *******
2026-03-11 01:07:08.030175 | orchestrator |
2026-03-11 01:07:08.030185 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-11 01:07:08.030194 | orchestrator | Wednesday 11 March 2026 01:06:05 +0000 (0:00:00.064) 0:00:59.167 *******
2026-03-11 01:07:08.030204 | orchestrator |
2026-03-11 01:07:08.030214 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2026-03-11 01:07:08.030223 | orchestrator | Wednesday 11 March 2026 01:06:05 +0000 (0:00:00.188) 0:00:59.356 *******
2026-03-11 01:07:08.030233 | orchestrator |
2026-03-11 01:07:08.030242 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2026-03-11 01:07:08.030252 | orchestrator | Wednesday 11 March 2026 01:06:05 +0000 (0:00:00.059) 0:00:59.415 *******
2026-03-11 01:07:08.030262 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:07:08.030272 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:07:08.030282 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:07:08.030291 | orchestrator |
2026-03-11 01:07:08.030302 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2026-03-11 01:07:08.030321 | orchestrator | Wednesday 11 March 2026 01:06:07 +0000 (0:00:01.858) 0:01:01.273 *******
2026-03-11 01:07:08.030332 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:07:08.030342 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:07:08.030351 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2026-03-11 01:07:08.030362 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2026-03-11 01:07:08.030372 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:07:08.030382 | orchestrator |
2026-03-11 01:07:08.030392 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2026-03-11 01:07:08.030402 | orchestrator | Wednesday 11 March 2026 01:06:34 +0000 (0:00:26.699) 0:01:27.973 *******
2026-03-11 01:07:08.030412 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:07:08.030421 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:07:08.030431 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:07:08.030441 | orchestrator |
2026-03-11 01:07:08.030450 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2026-03-11 01:07:08.030460 | orchestrator | Wednesday 11 March 2026 01:07:01 +0000 (0:00:27.473) 0:01:55.446 *******
2026-03-11 01:07:08.030470 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:07:08.030479 | orchestrator |
2026-03-11 01:07:08.030489 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2026-03-11 01:07:08.030498 | orchestrator | Wednesday 11 March 2026 01:07:03 +0000 (0:00:02.136) 0:01:57.583 *******
2026-03-11 01:07:08.030508 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:07:08.030518 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:07:08.030528 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:07:08.030537 | orchestrator |
2026-03-11 01:07:08.030547 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2026-03-11 01:07:08.030557 | orchestrator | Wednesday 11 March 2026 01:07:04 +0000 (0:00:00.476) 0:01:58.059 *******
2026-03-11 01:07:08.030567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': False}}})
2026-03-11 01:07:08.030579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2026-03-11 01:07:08.030597 | orchestrator |
2026-03-11 01:07:08.030607 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2026-03-11 01:07:08.030617 | orchestrator | Wednesday 11 March 2026 01:07:06 +0000 (0:00:02.139) 0:02:00.198 *******
2026-03-11 01:07:08.030627 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:07:08.030637 | orchestrator |
2026-03-11 01:07:08.030649 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 01:07:08.030666 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-11 01:07:08.030683 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-11 01:07:08.030701 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2026-03-11 01:07:08.030717 | orchestrator |
2026-03-11 01:07:08.030733 | orchestrator |
2026-03-11 01:07:08.030751 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 01:07:08.030767 | orchestrator | Wednesday 11 March 2026 01:07:06 +0000 (0:00:00.254) 0:02:00.453 *******
2026-03-11 01:07:08.030783 | orchestrator | ===============================================================================
2026-03-11 01:07:08.030797 | orchestrator | grafana :
Copying over custom dashboards ------------------------------- 37.80s 2026-03-11 01:07:08.030811 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 27.47s 2026-03-11 01:07:08.030825 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.70s 2026-03-11 01:07:08.030840 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.14s 2026-03-11 01:07:08.030855 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.14s 2026-03-11 01:07:08.030870 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.02s 2026-03-11 01:07:08.030886 | orchestrator | grafana : Creating grafana database ------------------------------------- 1.96s 2026-03-11 01:07:08.030902 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.86s 2026-03-11 01:07:08.030938 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.60s 2026-03-11 01:07:08.030957 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.52s 2026-03-11 01:07:08.030974 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 1.46s 2026-03-11 01:07:08.030991 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.43s 2026-03-11 01:07:08.031008 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.41s 2026-03-11 01:07:08.031036 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.38s 2026-03-11 01:07:08.031054 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 1.31s 2026-03-11 01:07:08.031067 | orchestrator | grafana : include_tasks ------------------------------------------------- 1.11s 2026-03-11 01:07:08.031077 | orchestrator | grafana : Check grafana 
containers -------------------------------------- 0.91s
2026-03-11 01:07:08.031087 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.88s
2026-03-11 01:07:08.031096 | orchestrator | grafana : Copying over extra configuration file ------------------------- 0.67s
2026-03-11 01:07:08.031106 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.66s
2026-03-11 01:07:11.071329 | orchestrator | 2026-03-11 01:07:11 | INFO  | Task b211ca60-6d1d-4e81-af3c-431b9bc034d2 is in state STARTED
2026-03-11 01:07:11.072366 | orchestrator | 2026-03-11 01:07:11 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state STARTED
2026-03-11 01:07:11.072399 | orchestrator | 2026-03-11 01:07:11 | INFO  | Wait 1 second(s) until the next check
[identical STARTED/Wait status checks, repeated every ~3 seconds from 01:07:14 through 01:08:36, elided]
2026-03-11 01:08:39.405853 | orchestrator | 2026-03-11 01:08:39 | INFO  | Task b211ca60-6d1d-4e81-af3c-431b9bc034d2 is in state SUCCESS
2026-03-11 01:08:39.407709 | orchestrator | 2026-03-11 01:08:39 | INFO  | Task 9aa66614-4cf7-40ef-82b1-b27475200ab0 is in state STARTED
2026-03-11 01:08:39.409814 | orchestrator | 2026-03-11 01:08:39 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state STARTED
2026-03-11 01:08:39.409853 | orchestrator | 2026-03-11 01:08:39 | INFO  | Wait 1 second(s) until the next check
[identical STARTED/Wait status checks from 01:08:42 through 01:12:35 elided; no status lines were logged between 01:10:01 and 01:12:01]
2026-03-11 01:12:38.137324 | orchestrator | 2026-03-11 01:12:38 | INFO  | Task 9aa66614-4cf7-40ef-82b1-b27475200ab0 is in state STARTED
2026-03-11 01:12:38.140598 | orchestrator | 2026-03-11 01:12:38 | INFO  | Task 831eb757-12c2-464b-90b9-6ba6e7b92644 is in state SUCCESS
2026-03-11 01:12:38.141992 | orchestrator |
2026-03-11 01:12:38.142104 | orchestrator |
2026-03-11 01:12:38.142114 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-11 01:12:38.142121 | orchestrator |
2026-03-11 01:12:38.142126 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-11 01:12:38.142132 | orchestrator | Wednesday 11 March 2026 01:06:38 +0000 (0:00:00.163) 0:00:00.163 *******
2026-03-11 01:12:38.142137 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:12:38.142143 | orchestrator | ok: [testbed-node-1]
2026-03-11 01:12:38.142148 | orchestrator | ok: [testbed-node-2]
2026-03-11 01:12:38.142152 | orchestrator |
2026-03-11 01:12:38.142162 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 01:12:38.142167 | orchestrator | Wednesday 11 March 2026
01:06:39 +0000 (0:00:00.279) 0:00:00.442 *******
2026-03-11 01:12:38.142172 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2026-03-11 01:12:38.142177 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2026-03-11 01:12:38.142181 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True)
2026-03-11 01:12:38.142186 | orchestrator |
2026-03-11 01:12:38.142191 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2026-03-11 01:12:38.142196 | orchestrator |
2026-03-11 01:12:38.142254 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2026-03-11 01:12:38.142262 | orchestrator | Wednesday 11 March 2026 01:06:39 +0000 (0:00:00.529) 0:00:00.972 *******
2026-03-11 01:12:38.142267 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:12:38.142273 | orchestrator | ok: [testbed-node-1]
2026-03-11 01:12:38.142284 | orchestrator | ok: [testbed-node-2]
2026-03-11 01:12:38.142289 | orchestrator |
2026-03-11 01:12:38.142318 | orchestrator | PLAY RECAP *********************************************************************
2026-03-11 01:12:38.142323 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 01:12:38.142338 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 01:12:38.142342 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2026-03-11 01:12:38.142347 | orchestrator |
2026-03-11 01:12:38.142351 | orchestrator |
2026-03-11 01:12:38.142356 | orchestrator | TASKS RECAP ********************************************************************
2026-03-11 01:12:38.142362 | orchestrator | Wednesday 11 March 2026 01:08:37 +0000 (0:01:57.650) 0:01:58.622 *******
2026-03-11 01:12:38.142567 | orchestrator | ===============================================================================
2026-03-11 01:12:38.142581 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 117.65s
2026-03-11 01:12:38.142586 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.53s
2026-03-11 01:12:38.142591 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.28s
2026-03-11 01:12:38.142596 | orchestrator |
2026-03-11 01:12:38.142601 | orchestrator |
2026-03-11 01:12:38.142606 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2026-03-11 01:12:38.142610 | orchestrator |
2026-03-11 01:12:38.142615 | orchestrator | TASK [Group hosts based on OpenStack release] **********************************
2026-03-11 01:12:38.142620 | orchestrator | Wednesday 11 March 2026 01:04:36 +0000 (0:00:00.289) 0:00:00.289 *******
2026-03-11 01:12:38.142624 | orchestrator | changed: [testbed-manager]
2026-03-11 01:12:38.142630 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:38.142635 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:12:38.142657 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:12:38.142662 | orchestrator | changed: [testbed-node-3]
2026-03-11 01:12:38.142667 | orchestrator | changed: [testbed-node-4]
2026-03-11 01:12:38.142671 | orchestrator | changed: [testbed-node-5]
2026-03-11 01:12:38.142676 | orchestrator |
2026-03-11 01:12:38.142682 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2026-03-11 01:12:38.142686 | orchestrator | Wednesday 11 March 2026 01:04:37 +0000 (0:00:00.899) 0:00:01.188 *******
2026-03-11 01:12:38.142691 | orchestrator | changed: [testbed-manager]
2026-03-11 01:12:38.142697 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:38.142702 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:12:38.142708 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:12:38.142714 | orchestrator | changed: [testbed-node-3]
2026-03-11 01:12:38.142718 | orchestrator | changed: [testbed-node-4]
2026-03-11 01:12:38.142723 | orchestrator | changed: [testbed-node-5]
2026-03-11 01:12:38.142727 | orchestrator |
2026-03-11 01:12:38.142732 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2026-03-11 01:12:38.142736 | orchestrator | Wednesday 11 March 2026 01:04:37 +0000 (0:00:00.785) 0:00:01.974 *******
2026-03-11 01:12:38.142741 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True)
2026-03-11 01:12:38.142745 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True)
2026-03-11 01:12:38.142750 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True)
2026-03-11 01:12:38.142754 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True)
2026-03-11 01:12:38.142759 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True)
2026-03-11 01:12:38.142763 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True)
2026-03-11 01:12:38.142768 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True)
2026-03-11 01:12:38.142772 | orchestrator |
2026-03-11 01:12:38.142776 | orchestrator | PLAY [Bootstrap nova API databases] ********************************************
2026-03-11 01:12:38.142781 | orchestrator |
2026-03-11 01:12:38.142785 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-11 01:12:38.142790 | orchestrator | Wednesday 11 March 2026 01:04:38 +0000 (0:00:00.614) 0:00:02.912 *******
2026-03-11 01:12:38.142796 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:12:38.142802 | orchestrator |
2026-03-11 01:12:38.142806 | orchestrator | TASK [nova : Creating Nova databases] ******************************************
2026-03-11 01:12:38.142811 | orchestrator | Wednesday 11 March 2026 01:04:39 +0000 (0:00:00.614) 0:00:03.526 *******
2026-03-11 01:12:38.142816 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0)
2026-03-11 01:12:38.143072 | orchestrator | changed: [testbed-node-0] => (item=nova_api)
2026-03-11 01:12:38.143225 | orchestrator |
2026-03-11 01:12:38.143235 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] *************
2026-03-11 01:12:38.143238 | orchestrator | Wednesday 11 March 2026 01:04:43 +0000 (0:00:03.603) 0:00:07.129 *******
2026-03-11 01:12:38.143241 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-11 01:12:38.143245 | orchestrator | changed: [testbed-node-0] => (item=None)
2026-03-11 01:12:38.143248 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:38.143251 | orchestrator |
2026-03-11 01:12:38.143254 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-11 01:12:38.143258 | orchestrator | Wednesday 11 March 2026 01:04:47 +0000 (0:00:03.915) 0:00:11.045 *******
2026-03-11 01:12:38.143261 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:38.143264 | orchestrator |
2026-03-11 01:12:38.143267 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
2026-03-11 01:12:38.143270 | orchestrator | Wednesday 11 March 2026 01:04:47 +0000 (0:00:00.569) 0:00:11.614 *******
2026-03-11 01:12:38.143273 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:38.143276 | orchestrator |
2026-03-11 01:12:38.143279 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
2026-03-11 01:12:38.143282 | orchestrator | Wednesday 11 March 2026 01:04:48 +0000 (0:00:01.215) 0:00:12.830 *******
2026-03-11 01:12:38.143293 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:38.143296 | orchestrator |
2026-03-11 01:12:38.143299 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-11 01:12:38.143302 | orchestrator | Wednesday 11 March 2026 01:04:51 +0000 (0:00:02.837) 0:00:15.668 *******
2026-03-11 01:12:38.143305 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.143309 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.143312 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.143315 | orchestrator |
2026-03-11 01:12:38.143343 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-11 01:12:38.143350 | orchestrator | Wednesday 11 March 2026 01:04:52 +0000 (0:00:00.436) 0:00:16.104 *******
2026-03-11 01:12:38.143355 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:12:38.143360 | orchestrator |
2026-03-11 01:12:38.143365 | orchestrator | TASK [nova : Create cell0 mappings] ********************************************
2026-03-11 01:12:38.143370 | orchestrator | Wednesday 11 March 2026 01:05:22 +0000 (0:00:30.811) 0:00:46.915 *******
2026-03-11 01:12:38.143375 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:38.143379 | orchestrator |
2026-03-11 01:12:38.143384 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-11 01:12:38.143389 | orchestrator | Wednesday 11 March 2026 01:05:39 +0000 (0:00:16.322) 0:01:03.238 *******
2026-03-11 01:12:38.143393 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:12:38.143398 | orchestrator |
2026-03-11 01:12:38.143403 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-11 01:12:38.143407 | orchestrator | Wednesday 11 March 2026 01:05:51 +0000 (0:00:12.351) 0:01:15.590 *******
2026-03-11 01:12:38.143412 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:12:38.143417 | orchestrator |
2026-03-11 01:12:38.143423 | orchestrator | TASK [nova : Update cell0 mappings] ********************************************
2026-03-11 01:12:38.143428 | orchestrator | Wednesday 11 March 2026 01:05:52 +0000 (0:00:01.101) 0:01:16.691 *******
2026-03-11 01:12:38.143433 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.143479 | orchestrator |
2026-03-11 01:12:38.143488 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-11 01:12:38.143494 | orchestrator | Wednesday 11 March 2026 01:05:53 +0000 (0:00:00.519) 0:01:17.211 *******
2026-03-11 01:12:38.143499 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:12:38.143504 | orchestrator |
2026-03-11 01:12:38.143509 | orchestrator | TASK [nova : Running Nova API bootstrap container] *****************************
2026-03-11 01:12:38.143513 | orchestrator | Wednesday 11 March 2026 01:05:53 +0000 (0:00:00.470) 0:01:17.681 *******
2026-03-11 01:12:38.143689 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:12:38.143693 | orchestrator |
2026-03-11 01:12:38.143696 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-11 01:12:38.143699 | orchestrator | Wednesday 11 March 2026 01:06:12 +0000 (0:00:18.390) 0:01:36.072 *******
2026-03-11 01:12:38.143703 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.143706 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.143709 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.143712 | orchestrator |
2026-03-11 01:12:38.143715 | orchestrator | PLAY [Bootstrap nova cell databases] *******************************************
2026-03-11 01:12:38.143718 | orchestrator |
2026-03-11 01:12:38.143721 | orchestrator | TASK [Bootstrap deploy] ********************************************************
2026-03-11 01:12:38.143724 | orchestrator | Wednesday 11 March 2026 01:06:12 +0000 (0:00:00.288) 0:01:36.360 *******
2026-03-11 01:12:38.143727 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:12:38.143730 | orchestrator |
2026-03-11 01:12:38.143733 | orchestrator | TASK [nova-cell : Creating Nova cell database]
*********************************
2026-03-11 01:12:38.143736 | orchestrator | Wednesday 11 March 2026 01:06:12 +0000 (0:00:00.515) 0:01:36.876 *******
2026-03-11 01:12:38.143740 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.143748 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.143751 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:38.143754 | orchestrator |
2026-03-11 01:12:38.143757 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
2026-03-11 01:12:38.143761 | orchestrator | Wednesday 11 March 2026 01:06:14 +0000 (0:00:02.049) 0:01:38.926 *******
2026-03-11 01:12:38.143764 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.143767 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.143770 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:38.143773 | orchestrator |
2026-03-11 01:12:38.143776 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-11 01:12:38.143779 | orchestrator | Wednesday 11 March 2026 01:06:17 +0000 (0:00:02.120) 0:01:41.047 *******
2026-03-11 01:12:38.143782 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.143785 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.143819 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.143823 | orchestrator |
2026-03-11 01:12:38.143826 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-11 01:12:38.143830 | orchestrator | Wednesday 11 March 2026 01:06:17 +0000 (0:00:00.337) 0:01:41.385 *******
2026-03-11 01:12:38.143833 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-11 01:12:38.143836 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.143839 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-11 01:12:38.143842 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.143845 | orchestrator | ok: [testbed-node-0] => (item=None)
2026-03-11 01:12:38.143848 | orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]
2026-03-11 01:12:38.143851 | orchestrator |
2026-03-11 01:12:38.143854 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
2026-03-11 01:12:38.143857 | orchestrator | Wednesday 11 March 2026 01:06:25 +0000 (0:00:08.523) 0:01:49.908 *******
2026-03-11 01:12:38.143862 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.143867 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.143873 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.143880 | orchestrator |
2026-03-11 01:12:38.143886 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
2026-03-11 01:12:38.143890 | orchestrator | Wednesday 11 March 2026 01:06:26 +0000 (0:00:00.331) 0:01:50.239 *******
2026-03-11 01:12:38.143895 | orchestrator | skipping: [testbed-node-0] => (item=None)
2026-03-11 01:12:38.143901 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.143905 | orchestrator | skipping: [testbed-node-1] => (item=None)
2026-03-11 01:12:38.143910 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.143914 | orchestrator | skipping: [testbed-node-2] => (item=None)
2026-03-11 01:12:38.143919 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.143923 | orchestrator |
2026-03-11 01:12:38.143928 | orchestrator | TASK [nova-cell : Ensuring config directories exist] ***************************
2026-03-11 01:12:38.143933 | orchestrator | Wednesday 11 March 2026 01:06:26 +0000 (0:00:00.608) 0:01:50.848 *******
2026-03-11 01:12:38.143938 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.143941 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.143944 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:38.143947 | orchestrator |
2026-03-11 01:12:38.143951 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
2026-03-11 01:12:38.143954 | orchestrator | Wednesday 11 March 2026 01:06:27 +0000 (0:00:00.708) 0:01:51.556 *******
2026-03-11 01:12:38.143957 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.143960 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.143963 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:38.143966 | orchestrator |
2026-03-11 01:12:38.143969 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2026-03-11 01:12:38.143972 | orchestrator | Wednesday 11 March 2026 01:06:28 +0000 (0:00:00.925) 0:01:52.482 *******
2026-03-11 01:12:38.143979 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.143982 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.143985 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:38.143988 | orchestrator |
2026-03-11 01:12:38.143991 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2026-03-11 01:12:38.143995 | orchestrator | Wednesday 11 March 2026 01:06:30 +0000 (0:00:02.151) 0:01:54.633 *******
2026-03-11 01:12:38.143998 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.144001 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.144004 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:12:38.144007 | orchestrator |
2026-03-11 01:12:38.144010 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2026-03-11 01:12:38.144013 | orchestrator | Wednesday 11 March 2026 01:06:54 +0000 (0:00:23.580) 0:02:18.213 *******
2026-03-11 01:12:38.144016 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.144019 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.144023 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:12:38.144026 | orchestrator |
2026-03-11 01:12:38.144029 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2026-03-11 01:12:38.144032 | orchestrator | Wednesday 11 March 2026 01:07:07 +0000 (0:00:12.972) 0:02:31.186 *******
2026-03-11 01:12:38.144037 | orchestrator | ok: [testbed-node-0]
2026-03-11 01:12:38.144044 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.144051 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.144056 | orchestrator |
2026-03-11 01:12:38.144061 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2026-03-11 01:12:38.144066 | orchestrator | Wednesday 11 March 2026 01:07:07 +0000 (0:00:00.816) 0:02:32.003 *******
2026-03-11 01:12:38.144071 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.144075 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.144080 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:38.144084 | orchestrator |
2026-03-11 01:12:38.144089 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2026-03-11 01:12:38.144095 | orchestrator | Wednesday 11 March 2026 01:07:21 +0000 (0:00:14.009) 0:02:46.013 *******
2026-03-11 01:12:38.144099 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.144114 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.144117 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.144120 | orchestrator |
2026-03-11 01:12:38.144123 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2026-03-11 01:12:38.144127 | orchestrator | Wednesday 11 March 2026 01:07:22 +0000 (0:00:00.978) 0:02:46.992 *******
2026-03-11 01:12:38.144130 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.144133 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.144136 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.144139 | orchestrator |
2026-03-11 01:12:38.144142 | orchestrator | PLAY [Apply role nova] *********************************************************
2026-03-11 01:12:38.144145 | orchestrator |
2026-03-11 01:12:38.144148 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-11 01:12:38.144152 | orchestrator | Wednesday 11 March 2026 01:07:23 +0000 (0:00:00.542) 0:02:47.534 *******
2026-03-11 01:12:38.144155 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:12:38.144159 | orchestrator |
2026-03-11 01:12:38.144199 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2026-03-11 01:12:38.144203 | orchestrator | Wednesday 11 March 2026 01:07:24 +0000 (0:00:00.516) 0:02:48.050 *******
2026-03-11 01:12:38.144206 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))
2026-03-11 01:12:38.144210 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2026-03-11 01:12:38.144213 | orchestrator |
2026-03-11 01:12:38.144216 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2026-03-11 01:12:38.144219 | orchestrator | Wednesday 11 March 2026 01:07:27 +0000 (0:00:03.235) 0:02:51.286 *******
2026-03-11 01:12:38.144226 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)
2026-03-11 01:12:38.144230 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)
2026-03-11 01:12:38.144233 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2026-03-11 01:12:38.144237 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2026-03-11 01:12:38.144240 | orchestrator |
2026-03-11 01:12:38.144243 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2026-03-11 01:12:38.144247 | orchestrator | Wednesday 11 March 2026 01:07:33 +0000 (0:00:06.353) 0:02:57.640 *******
2026-03-11 01:12:38.144250 | orchestrator | ok: [testbed-node-0] => (item=service)
2026-03-11 01:12:38.144253 | orchestrator |
2026-03-11 01:12:38.144256 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2026-03-11 01:12:38.144259 | orchestrator | Wednesday 11 March 2026 01:07:36 +0000 (0:00:03.220) 0:03:00.861 *******
2026-03-11 01:12:38.144262 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2026-03-11 01:12:38.144265 | orchestrator | [WARNING]: Module did not set no_log for update_password
2026-03-11 01:12:38.144268 | orchestrator |
2026-03-11 01:12:38.144271 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2026-03-11 01:12:38.144274 | orchestrator | Wednesday 11 March 2026 01:07:41 +0000 (0:00:04.807) 0:03:05.669 *******
2026-03-11 01:12:38.144277 | orchestrator | ok: [testbed-node-0] => (item=admin)
2026-03-11 01:12:38.144280 | orchestrator |
2026-03-11 01:12:38.144283 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2026-03-11 01:12:38.144287 | orchestrator | Wednesday 11 March 2026 01:07:44 +0000 (0:00:03.040) 0:03:08.709 *******
2026-03-11 01:12:38.144290 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2026-03-11 01:12:38.144293 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2026-03-11 01:12:38.144296 | orchestrator |
2026-03-11 01:12:38.144299 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2026-03-11 01:12:38.144302 | orchestrator | Wednesday 11 March 2026 01:07:51 +0000 (0:00:06.919) 0:03:15.628 *******
2026-03-11 01:12:38.144309 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value':
{'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 01:12:38.144356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 01:12:38.144369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 01:12:38.144373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.144377 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.144381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.144384 | orchestrator | 2026-03-11 01:12:38.144387 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2026-03-11 01:12:38.144390 | orchestrator | Wednesday 11 March 2026 01:07:52 +0000 (0:00:01.200) 0:03:16.829 ******* 2026-03-11 01:12:38.144396 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.144399 | orchestrator | 2026-03-11 01:12:38.144402 | 
orchestrator | TASK [nova : Set nova policy file] *********************************************
2026-03-11 01:12:38.144405 | orchestrator | Wednesday 11 March 2026 01:07:52 +0000 (0:00:00.123) 0:03:16.952 *******
2026-03-11 01:12:38.144408 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.144411 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.144415 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.144418 | orchestrator |
2026-03-11 01:12:38.144421 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2026-03-11 01:12:38.144424 | orchestrator | Wednesday 11 March 2026 01:07:53 +0000 (0:00:00.416) 0:03:17.369 *******
2026-03-11 01:12:38.144439 | orchestrator | ok: [testbed-node-0 -> localhost]
2026-03-11 01:12:38.144445 | orchestrator |
2026-03-11 01:12:38.144450 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2026-03-11 01:12:38.144455 | orchestrator | Wednesday 11 March 2026 01:07:53 +0000 (0:00:00.661) 0:03:18.030 *******
2026-03-11 01:12:38.144460 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.144465 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.144470 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.144475 | orchestrator |
2026-03-11 01:12:38.144479 | orchestrator | TASK [nova : include_tasks] ****************************************************
2026-03-11 01:12:38.144491 | orchestrator | Wednesday 11 March 2026 01:07:54 +0000 (0:00:00.272) 0:03:18.303 *******
2026-03-11 01:12:38.144500 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2026-03-11 01:12:38.144505 | orchestrator |
2026-03-11 01:12:38.144510 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2026-03-11 01:12:38.144515 | orchestrator | Wednesday 11 March 2026 01:07:54 +0000 (0:00:00.465) 0:03:18.769
******* 2026-03-11 01:12:38.144522 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 01:12:38.144527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 01:12:38.144561 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 01:12:38.144570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.144576 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.144581 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.144586 | orchestrator | 2026-03-11 01:12:38.144591 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-11 01:12:38.144596 | orchestrator | Wednesday 11 March 2026 01:07:57 +0000 (0:00:02.324) 0:03:21.094 ******* 2026-03-11 01:12:38.144601 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 01:12:38.144611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:12:38.144616 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.144637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 
'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 01:12:38.144644 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:12:38.144663 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.144670 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 
'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 01:12:38.144685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:12:38.144691 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.144695 | orchestrator | 2026-03-11 01:12:38.144700 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-11 01:12:38.144704 | 
orchestrator | Wednesday 11 March 2026 01:07:57 +0000 (0:00:00.520) 0:03:21.614 ******* 2026-03-11 01:12:38.144727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 01:12:38.144733 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:12:38.144738 | 
orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.144744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 01:12:38.144753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:12:38.144758 | orchestrator | skipping: [testbed-node-1] 
2026-03-11 01:12:38.144777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2026-03-11 01:12:38.144783 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2026-03-11 01:12:38.144788 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.144793 | orchestrator | 
2026-03-11 01:12:38.144798 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2026-03-11 01:12:38.144802 | orchestrator | Wednesday 11 March 2026 01:07:58 +0000 (0:00:00.696) 0:03:22.311 *******
2026-03-11 01:12:38.144808 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 01:12:38.144817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 01:12:38.144839 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 01:12:38.144845 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.144851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.144859 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.144864 | orchestrator |
2026-03-11 01:12:38.144869 | orchestrator | TASK [nova : Copying over nova.conf] *******************************************
2026-03-11 01:12:38.144874 | orchestrator | Wednesday 11 March 2026 01:08:00 +0000 (0:00:02.178) 0:03:24.489 *******
2026-03-11 01:12:38.144894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 01:12:38.144901 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 01:12:38.144908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 01:12:38.144918 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.144924 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.144943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.144950 | orchestrator |
2026-03-11 01:12:38.144954 | orchestrator | TASK [nova : Copying over existing policy file] ********************************
2026-03-11 01:12:38.144959 | orchestrator | Wednesday 11 March 2026 01:08:05 +0000 (0:00:05.151) 0:03:29.641 *******
2026-03-11 01:12:38.144965 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 01:12:38.144977 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.144981 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.144985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 01:12:38.144989 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.144993 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.145010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 01:12:38.145014 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.145021 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.145024 | orchestrator |
2026-03-11 01:12:38.145028 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] **********************************
2026-03-11 01:12:38.145031 | orchestrator | Wednesday 11 March 2026 01:08:06 +0000 (0:00:00.589) 0:03:30.230 *******
2026-03-11 01:12:38.145035 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:38.145039 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:12:38.145042 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:12:38.145046 | orchestrator |
2026-03-11 01:12:38.145049 | orchestrator | TASK [nova : Copying over vendordata file] *************************************
2026-03-11 01:12:38.145053 | orchestrator | Wednesday 11 March 2026 01:08:07 +0000 (0:00:01.586) 0:03:31.817 *******
2026-03-11 01:12:38.145056 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.145060 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.145063 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.145067 | orchestrator |
2026-03-11 01:12:38.145070 | orchestrator | TASK [nova : Check nova containers] ********************************************
2026-03-11 01:12:38.145074 | orchestrator | Wednesday 11 March 2026 01:08:08 +0000 (0:00:00.349) 0:03:32.166 *******
2026-03-11 01:12:38.145078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 01:12:38.145094 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2026-03-11 01:12:38.145101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image':
'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2026-03-11 01:12:38.145105 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145109 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145115 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145121 | orchestrator | 2026-03-11 01:12:38.145126 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-11 01:12:38.145131 | orchestrator | Wednesday 11 March 2026 01:08:10 +0000 (0:00:02.357) 0:03:34.524 ******* 2026-03-11 01:12:38.145138 | orchestrator | 2026-03-11 01:12:38.145144 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-11 01:12:38.145167 | orchestrator | Wednesday 11 March 2026 01:08:10 +0000 (0:00:00.131) 0:03:34.656 ******* 2026-03-11 01:12:38.145173 | orchestrator | 2026-03-11 01:12:38.145178 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2026-03-11 01:12:38.145182 | orchestrator | Wednesday 11 March 2026 01:08:10 +0000 (0:00:00.125) 0:03:34.781 ******* 2026-03-11 01:12:38.145187 | orchestrator | 2026-03-11 01:12:38.145197 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2026-03-11 
01:12:38.145201 | orchestrator | Wednesday 11 March 2026 01:08:10 +0000 (0:00:00.125) 0:03:34.906 ******* 2026-03-11 01:12:38.145206 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:12:38.145210 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:12:38.145215 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:12:38.145220 | orchestrator | 2026-03-11 01:12:38.145224 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2026-03-11 01:12:38.145229 | orchestrator | Wednesday 11 March 2026 01:08:30 +0000 (0:00:19.585) 0:03:54.492 ******* 2026-03-11 01:12:38.145235 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:12:38.145240 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:12:38.145244 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:12:38.145249 | orchestrator | 2026-03-11 01:12:38.145255 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2026-03-11 01:12:38.145260 | orchestrator | 2026-03-11 01:12:38.145265 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-11 01:12:38.145268 | orchestrator | Wednesday 11 March 2026 01:08:40 +0000 (0:00:10.250) 0:04:04.742 ******* 2026-03-11 01:12:38.145272 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:12:38.145276 | orchestrator | 2026-03-11 01:12:38.145279 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-11 01:12:38.145282 | orchestrator | Wednesday 11 March 2026 01:08:41 +0000 (0:00:01.153) 0:04:05.896 ******* 2026-03-11 01:12:38.145285 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:38.145288 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:38.145291 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:38.145294 | 
orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.145297 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.145300 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.145303 | orchestrator | 2026-03-11 01:12:38.145306 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2026-03-11 01:12:38.145309 | orchestrator | Wednesday 11 March 2026 01:08:42 +0000 (0:00:00.563) 0:04:06.460 ******* 2026-03-11 01:12:38.145312 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.145315 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.145319 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.145322 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 01:12:38.145402 | orchestrator | 2026-03-11 01:12:38.145412 | orchestrator | TASK [module-load : Load modules] ********************************************** 2026-03-11 01:12:38.145416 | orchestrator | Wednesday 11 March 2026 01:08:43 +0000 (0:00:00.980) 0:04:07.440 ******* 2026-03-11 01:12:38.145421 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2026-03-11 01:12:38.145425 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2026-03-11 01:12:38.145430 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2026-03-11 01:12:38.145435 | orchestrator | 2026-03-11 01:12:38.145440 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2026-03-11 01:12:38.145445 | orchestrator | Wednesday 11 March 2026 01:08:44 +0000 (0:00:00.613) 0:04:08.054 ******* 2026-03-11 01:12:38.145449 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2026-03-11 01:12:38.145454 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2026-03-11 01:12:38.145459 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2026-03-11 01:12:38.145464 | orchestrator | 2026-03-11 01:12:38.145468 | 
orchestrator | TASK [module-load : Drop module persistence] *********************************** 2026-03-11 01:12:38.145472 | orchestrator | Wednesday 11 March 2026 01:08:45 +0000 (0:00:01.217) 0:04:09.271 ******* 2026-03-11 01:12:38.145477 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2026-03-11 01:12:38.145481 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:38.145492 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2026-03-11 01:12:38.145496 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:38.145502 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2026-03-11 01:12:38.145507 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:38.145512 | orchestrator | 2026-03-11 01:12:38.145517 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2026-03-11 01:12:38.145522 | orchestrator | Wednesday 11 March 2026 01:08:45 +0000 (0:00:00.554) 0:04:09.825 ******* 2026-03-11 01:12:38.145527 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-11 01:12:38.145530 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-11 01:12:38.145533 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.145537 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-11 01:12:38.145540 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-11 01:12:38.145543 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-11 01:12:38.145547 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.145552 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-11 01:12:38.145556 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2026-03-11 01:12:38.145563 | orchestrator | 
skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2026-03-11 01:12:38.145569 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.145574 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2026-03-11 01:12:38.145605 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-11 01:12:38.145612 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-11 01:12:38.145618 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2026-03-11 01:12:38.145622 | orchestrator | 2026-03-11 01:12:38.145628 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2026-03-11 01:12:38.145633 | orchestrator | Wednesday 11 March 2026 01:08:48 +0000 (0:00:02.266) 0:04:12.091 ******* 2026-03-11 01:12:38.145639 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.145644 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.145649 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.145654 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:12:38.145659 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:12:38.145665 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:12:38.145668 | orchestrator | 2026-03-11 01:12:38.145671 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2026-03-11 01:12:38.145675 | orchestrator | Wednesday 11 March 2026 01:08:49 +0000 (0:00:01.332) 0:04:13.424 ******* 2026-03-11 01:12:38.145678 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.145681 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.145684 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.145687 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:12:38.145690 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:12:38.145693 | orchestrator | 
changed: [testbed-node-5] 2026-03-11 01:12:38.145696 | orchestrator | 2026-03-11 01:12:38.145701 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2026-03-11 01:12:38.145709 | orchestrator | Wednesday 11 March 2026 01:08:51 +0000 (0:00:01.745) 0:04:15.169 ******* 2026-03-11 01:12:38.145716 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145728 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145734 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145758 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145767 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145774 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145786 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145791 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145796 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145801 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145823 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145830 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145833 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145840 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145846 | orchestrator | 2026-03-11 01:12:38.145849 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-11 01:12:38.145853 | orchestrator | Wednesday 11 March 2026 01:08:53 +0000 (0:00:01.967) 0:04:17.137 ******* 2026-03-11 01:12:38.145856 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:12:38.145860 | orchestrator | 2026-03-11 01:12:38.145863 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2026-03-11 01:12:38.145866 | orchestrator | Wednesday 11 March 2026 01:08:54 +0000 (0:00:01.159) 0:04:18.296 ******* 2026-03-11 01:12:38.145880 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 
'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145884 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145892 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145895 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145899 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145902 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 
'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145916 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145920 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145930 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145949 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145953 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145959 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.145962 | orchestrator | 2026-03-11 01:12:38.145965 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2026-03-11 01:12:38.145968 | orchestrator | Wednesday 11 March 2026 01:08:57 +0000 (0:00:03.229) 0:04:21.525 ******* 2026-03-11 01:12:38.145971 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-11 01:12:38.145975 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 
'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-11 01:12:38.145978 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-11 01:12:38.145982 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:38.145994 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 
'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-11 01:12:38.146001 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-11 01:12:38.146004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-11 01:12:38.146007 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:38.146029 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 
'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-11 01:12:38.146033 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-11 01:12:38.146048 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-11 01:12:38.146054 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:38.146057 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-11 01:12:38.146061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:12:38.146064 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.146067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-11 01:12:38.146071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:12:38.146074 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.146077 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-11 01:12:38.146080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:12:38.146086 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.146089 | orchestrator | 2026-03-11 01:12:38.146092 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2026-03-11 01:12:38.146104 | orchestrator | Wednesday 11 March 2026 01:08:59 +0000 (0:00:01.580) 0:04:23.105 ******* 2026-03-11 01:12:38.146108 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-11 01:12:38.146111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-11 01:12:38.146114 | orchestrator | skipping: 
[testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-11 01:12:38.146118 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-11 01:12:38.146121 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:38.146124 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-11 01:12:38.146140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-11 01:12:38.146144 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:38.146147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 
67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-11 01:12:38.146151 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-11 01:12:38.146154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2026-03-11 01:12:38.146157 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:38.146160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-11 01:12:38.146164 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:12:38.146170 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.146182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-11 01:12:38.146186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:12:38.146189 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.146193 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2026-03-11 01:12:38.146196 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2026-03-11 01:12:38.146199 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.146202 | orchestrator | 2026-03-11 01:12:38.146205 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2026-03-11 01:12:38.146209 | orchestrator | Wednesday 11 March 2026 01:09:01 +0000 (0:00:02.209) 0:04:25.315 ******* 2026-03-11 01:12:38.146212 | orchestrator | skipping: [testbed-node-0] 
2026-03-11 01:12:38.146215 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.146218 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.146221 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 01:12:38.146224 | orchestrator | 2026-03-11 01:12:38.146227 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2026-03-11 01:12:38.146230 | orchestrator | Wednesday 11 March 2026 01:09:02 +0000 (0:00:01.006) 0:04:26.321 ******* 2026-03-11 01:12:38.146233 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-11 01:12:38.146238 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-11 01:12:38.146242 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-11 01:12:38.146245 | orchestrator | 2026-03-11 01:12:38.146248 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2026-03-11 01:12:38.146251 | orchestrator | Wednesday 11 March 2026 01:09:03 +0000 (0:00:01.050) 0:04:27.372 ******* 2026-03-11 01:12:38.146254 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-11 01:12:38.146257 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-11 01:12:38.146260 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-11 01:12:38.146263 | orchestrator | 2026-03-11 01:12:38.146266 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2026-03-11 01:12:38.146269 | orchestrator | Wednesday 11 March 2026 01:09:04 +0000 (0:00:00.991) 0:04:28.363 ******* 2026-03-11 01:12:38.146272 | orchestrator | ok: [testbed-node-3] 2026-03-11 01:12:38.146276 | orchestrator | ok: [testbed-node-4] 2026-03-11 01:12:38.146279 | orchestrator | ok: [testbed-node-5] 2026-03-11 01:12:38.146282 | orchestrator | 2026-03-11 01:12:38.146285 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2026-03-11 
01:12:38.146288 | orchestrator | Wednesday 11 March 2026 01:09:04 +0000 (0:00:00.529) 0:04:28.893 ******* 2026-03-11 01:12:38.146291 | orchestrator | ok: [testbed-node-3] 2026-03-11 01:12:38.146294 | orchestrator | ok: [testbed-node-4] 2026-03-11 01:12:38.146297 | orchestrator | ok: [testbed-node-5] 2026-03-11 01:12:38.146300 | orchestrator | 2026-03-11 01:12:38.146303 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2026-03-11 01:12:38.146306 | orchestrator | Wednesday 11 March 2026 01:09:05 +0000 (0:00:00.752) 0:04:29.645 ******* 2026-03-11 01:12:38.146309 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-11 01:12:38.146312 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-11 01:12:38.146343 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-11 01:12:38.146348 | orchestrator | 2026-03-11 01:12:38.146351 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2026-03-11 01:12:38.146354 | orchestrator | Wednesday 11 March 2026 01:09:06 +0000 (0:00:01.196) 0:04:30.842 ******* 2026-03-11 01:12:38.146357 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-11 01:12:38.146360 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-11 01:12:38.146363 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2026-03-11 01:12:38.146366 | orchestrator | 2026-03-11 01:12:38.146369 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2026-03-11 01:12:38.146372 | orchestrator | Wednesday 11 March 2026 01:09:07 +0000 (0:00:01.129) 0:04:31.972 ******* 2026-03-11 01:12:38.146375 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2026-03-11 01:12:38.146378 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2026-03-11 01:12:38.146381 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 
2026-03-11 01:12:38.146385 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2026-03-11 01:12:38.146388 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2026-03-11 01:12:38.146391 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2026-03-11 01:12:38.146394 | orchestrator | 2026-03-11 01:12:38.146397 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2026-03-11 01:12:38.146400 | orchestrator | Wednesday 11 March 2026 01:09:11 +0000 (0:00:03.810) 0:04:35.783 ******* 2026-03-11 01:12:38.146403 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:38.146406 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:38.146409 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:38.146412 | orchestrator | 2026-03-11 01:12:38.146415 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2026-03-11 01:12:38.146418 | orchestrator | Wednesday 11 March 2026 01:09:12 +0000 (0:00:00.459) 0:04:36.242 ******* 2026-03-11 01:12:38.146421 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:38.146424 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:38.146430 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:38.146433 | orchestrator | 2026-03-11 01:12:38.146437 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2026-03-11 01:12:38.146440 | orchestrator | Wednesday 11 March 2026 01:09:12 +0000 (0:00:00.307) 0:04:36.550 ******* 2026-03-11 01:12:38.146443 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:12:38.146446 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:12:38.146449 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:12:38.146452 | orchestrator | 2026-03-11 01:12:38.146455 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2026-03-11 01:12:38.146458 | orchestrator | Wednesday 11 
March 2026 01:09:13 +0000 (0:00:01.197) 0:04:37.747 ******* 2026-03-11 01:12:38.146461 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-11 01:12:38.146465 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-11 01:12:38.146468 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2026-03-11 01:12:38.146471 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-11 01:12:38.146474 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-11 01:12:38.146477 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2026-03-11 01:12:38.146480 | orchestrator | 2026-03-11 01:12:38.146483 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2026-03-11 01:12:38.146486 | orchestrator | Wednesday 11 March 2026 01:09:16 +0000 (0:00:03.124) 0:04:40.872 ******* 2026-03-11 01:12:38.146489 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-11 01:12:38.146492 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-11 01:12:38.146495 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-11 01:12:38.146498 | orchestrator | changed: [testbed-node-3] => (item=None) 2026-03-11 01:12:38.146502 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:12:38.146508 | orchestrator | changed: [testbed-node-4] => (item=None) 2026-03-11 01:12:38.146512 | orchestrator | changed: [testbed-node-4] 2026-03-11 
01:12:38.146516 | orchestrator | changed: [testbed-node-5] => (item=None) 2026-03-11 01:12:38.146524 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:12:38.146530 | orchestrator | 2026-03-11 01:12:38.146535 | orchestrator | TASK [nova-cell : Include tasks from qemu_wrapper.yml] ************************* 2026-03-11 01:12:38.146540 | orchestrator | Wednesday 11 March 2026 01:09:20 +0000 (0:00:03.211) 0:04:44.084 ******* 2026-03-11 01:12:38.146545 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.146550 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.146554 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.146558 | orchestrator | included: /ansible/roles/nova-cell/tasks/qemu_wrapper.yml for testbed-node-3, testbed-node-4, testbed-node-5 2026-03-11 01:12:38.146563 | orchestrator | 2026-03-11 01:12:38.146568 | orchestrator | TASK [nova-cell : Check qemu wrapper file] ************************************* 2026-03-11 01:12:38.146573 | orchestrator | Wednesday 11 March 2026 01:09:21 +0000 (0:00:01.592) 0:04:45.676 ******* 2026-03-11 01:12:38.146578 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-11 01:12:38.146583 | orchestrator | ok: [testbed-node-4 -> localhost] 2026-03-11 01:12:38.146587 | orchestrator | ok: [testbed-node-5 -> localhost] 2026-03-11 01:12:38.146593 | orchestrator | 2026-03-11 01:12:38.146614 | orchestrator | TASK [nova-cell : Copy qemu wrapper] ******************************************* 2026-03-11 01:12:38.146618 | orchestrator | Wednesday 11 March 2026 01:09:22 +0000 (0:00:01.108) 0:04:46.785 ******* 2026-03-11 01:12:38.146625 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:38.146628 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:38.146631 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:38.146634 | orchestrator | 2026-03-11 01:12:38.146637 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] ********************** 2026-03-11 
01:12:38.146640 | orchestrator | Wednesday 11 March 2026 01:09:23 +0000 (0:00:00.334) 0:04:47.119 ******* 2026-03-11 01:12:38.146643 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:38.146646 | orchestrator | 2026-03-11 01:12:38.146649 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2026-03-11 01:12:38.146653 | orchestrator | Wednesday 11 March 2026 01:09:23 +0000 (0:00:00.136) 0:04:47.255 ******* 2026-03-11 01:12:38.146656 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:38.146659 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:38.146662 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:38.146665 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.146668 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.146672 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.146677 | orchestrator | 2026-03-11 01:12:38.146683 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2026-03-11 01:12:38.146690 | orchestrator | Wednesday 11 March 2026 01:09:23 +0000 (0:00:00.533) 0:04:47.789 ******* 2026-03-11 01:12:38.146696 | orchestrator | ok: [testbed-node-3 -> localhost] 2026-03-11 01:12:38.146700 | orchestrator | 2026-03-11 01:12:38.146705 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2026-03-11 01:12:38.146709 | orchestrator | Wednesday 11 March 2026 01:09:24 +0000 (0:00:00.912) 0:04:48.702 ******* 2026-03-11 01:12:38.146715 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:38.146720 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:38.146725 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:38.146731 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.146736 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.146741 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.146746 | 
orchestrator | 2026-03-11 01:12:38.146750 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2026-03-11 01:12:38.146753 | orchestrator | Wednesday 11 March 2026 01:09:25 +0000 (0:00:00.590) 0:04:49.293 ******* 2026-03-11 01:12:38.146757 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146761 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146771 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146782 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146785 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146788 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146794 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146802 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146808 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146816 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146824 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146833 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146841 | orchestrator | 2026-03-11 01:12:38.146846 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2026-03-11 01:12:38.146851 | orchestrator | Wednesday 11 March 2026 01:09:29 +0000 (0:00:04.266) 0:04:53.559 ******* 2026-03-11 01:12:38.146860 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-11 01:12:38.146866 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-11 01:12:38.146871 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-11 01:12:38.146877 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-11 01:12:38.146882 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2026-03-11 01:12:38.146892 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2026-03-11 01:12:38.146898 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146902 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146905 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146908 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146920 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146932 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2026-03-11 01:12:38.146941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': 
'30'}}}) 2026-03-11 01:12:38.146946 | orchestrator | 2026-03-11 01:12:38.146951 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2026-03-11 01:12:38.146956 | orchestrator | Wednesday 11 March 2026 01:09:35 +0000 (0:00:06.038) 0:04:59.598 ******* 2026-03-11 01:12:38.146961 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:38.146965 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:38.146972 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:38.146976 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.146981 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.146985 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.146997 | orchestrator | 2026-03-11 01:12:38.147001 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2026-03-11 01:12:38.147006 | orchestrator | Wednesday 11 March 2026 01:09:37 +0000 (0:00:01.777) 0:05:01.375 ******* 2026-03-11 01:12:38.147011 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-11 01:12:38.147016 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-11 01:12:38.147021 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2026-03-11 01:12:38.147027 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-11 01:12:38.147031 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-11 01:12:38.147037 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-11 01:12:38.147043 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-11 01:12:38.147047 | orchestrator | skipping: [testbed-node-2] 2026-03-11 
01:12:38.147050 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.147053 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2026-03-11 01:12:38.147056 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.147060 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2026-03-11 01:12:38.147066 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-11 01:12:38.147070 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-11 01:12:38.147076 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2026-03-11 01:12:38.147080 | orchestrator | 2026-03-11 01:12:38.147085 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2026-03-11 01:12:38.147091 | orchestrator | Wednesday 11 March 2026 01:09:41 +0000 (0:00:03.718) 0:05:05.094 ******* 2026-03-11 01:12:38.147095 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:38.147101 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:38.147106 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:38.147111 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.147117 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.147121 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.147127 | orchestrator | 2026-03-11 01:12:38.147131 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2026-03-11 01:12:38.147134 | orchestrator | Wednesday 11 March 2026 01:09:41 +0000 (0:00:00.496) 0:05:05.590 ******* 2026-03-11 01:12:38.147140 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2026-03-11 01:12:38.147144 | orchestrator | skipping: 
[testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-11 01:12:38.147147 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-11 01:12:38.147150 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-11 01:12:38.147153 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-11 01:12:38.147156 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-11 01:12:38.147159 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2026-03-11 01:12:38.147162 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-11 01:12:38.147166 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-11 01:12:38.147172 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-11 01:12:38.147175 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.147178 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-11 01:12:38.147181 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.147184 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-11 01:12:38.147187 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.147190 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-11 01:12:38.147193 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-11 01:12:38.147196 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2026-03-11 01:12:38.147199 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-11 01:12:38.147202 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-11 01:12:38.147205 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2026-03-11 01:12:38.147208 | orchestrator |
2026-03-11 01:12:38.147211 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2026-03-11 01:12:38.147215 | orchestrator | Wednesday 11 March 2026 01:09:46 +0000 (0:00:04.894) 0:05:10.485 *******
2026-03-11 01:12:38.147218 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-11 01:12:38.147221 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-11 01:12:38.147224 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-11 01:12:38.147227 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-11 01:12:38.147230 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-11 01:12:38.147233 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-11 01:12:38.147236 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-11 01:12:38.147239 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-11 01:12:38.147242 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2026-03-11 01:12:38.147245 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-11 01:12:38.147248 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-11 01:12:38.147251 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-11 01:12:38.147254 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-11 01:12:38.147257 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.147260 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-11 01:12:38.147263 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-11 01:12:38.147266 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.147269 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-11 01:12:38.147272 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.147275 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-11 01:12:38.147284 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2026-03-11 01:12:38.147287 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-11 01:12:38.147290 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-11 01:12:38.147293 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2026-03-11 01:12:38.147296 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-11 01:12:38.147300 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-11 01:12:38.147303 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2026-03-11 01:12:38.147306 | orchestrator |
2026-03-11 01:12:38.147309 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2026-03-11 01:12:38.147312 | orchestrator | Wednesday 11 March 2026 01:09:52 +0000 (0:00:05.997) 0:05:16.482 *******
2026-03-11 01:12:38.147315 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:12:38.147318 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:12:38.147321 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:12:38.147324 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.147355 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.147358 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.147361 | orchestrator |
2026-03-11 01:12:38.147364 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2026-03-11 01:12:38.147367 | orchestrator | Wednesday 11 March 2026 01:09:53 +0000 (0:00:00.803) 0:05:17.286 *******
2026-03-11 01:12:38.147370 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:12:38.147373 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:12:38.147376 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:12:38.147380 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.147383 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.147386 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.147389 | orchestrator |
2026-03-11 01:12:38.147392 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2026-03-11 01:12:38.147395 | orchestrator | Wednesday 11 March 2026 01:09:53 +0000 (0:00:00.576) 0:05:17.863 *******
2026-03-11 01:12:38.147398 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.147401 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.147404 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.147407 | orchestrator | changed: [testbed-node-3]
2026-03-11 01:12:38.147410 | orchestrator | changed: [testbed-node-5]
2026-03-11 01:12:38.147413 | orchestrator | changed: [testbed-node-4]
2026-03-11 01:12:38.147416 | orchestrator |
2026-03-11 01:12:38.147419 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2026-03-11 01:12:38.147423 | orchestrator | Wednesday 11 March 2026 01:09:56 +0000 (0:00:02.305) 0:05:20.168 *******
2026-03-11 01:12:38.147426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-11 01:12:38.147429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-11 01:12:38.147505 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.147520 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:12:38.147526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-11 01:12:38.147529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-11 01:12:38.147533 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.147536 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:12:38.147539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-11 01:12:38.147545 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-11 01:12:38.147552 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.147556 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:12:38.147561 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-11 01:12:38.147564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.147567 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.147570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-11 01:12:38.147574 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.147579 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.147582 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-11 01:12:38.147588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.147591 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.147594 | orchestrator |
2026-03-11 01:12:38.147597 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2026-03-11 01:12:38.147601 | orchestrator | Wednesday 11 March 2026 01:09:57 +0000 (0:00:01.587) 0:05:21.755 *******
2026-03-11 01:12:38.147604 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2026-03-11 01:12:38.147607 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2026-03-11 01:12:38.147610 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:12:38.147613 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2026-03-11 01:12:38.147616 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2026-03-11 01:12:38.147619 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:12:38.147622 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2026-03-11 01:12:38.147625 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2026-03-11 01:12:38.147630 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:12:38.147634 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2026-03-11 01:12:38.147637 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2026-03-11 01:12:38.147640 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.147643 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2026-03-11 01:12:38.147646 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2026-03-11 01:12:38.147649 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.147652 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2026-03-11 01:12:38.147655 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2026-03-11 01:12:38.147658 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.147661 | orchestrator |
2026-03-11 01:12:38.147664 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2026-03-11 01:12:38.147667 | orchestrator | Wednesday 11 March 2026 01:09:58 +0000 (0:00:00.947) 0:05:22.703 *******
2026-03-11 01:12:38.147671 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-11 01:12:38.147681 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-11 01:12:38.147689 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2026-03-11 01:12:38.147697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-11 01:12:38.147704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-11 01:12:38.147710 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-11 01:12:38.147718 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-11 01:12:38.147723 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2026-03-11 01:12:38.147728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2026-03-11 01:12:38.147733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.147741 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.147749 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.147757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.147761 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', '', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.147767 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2026-03-11 01:12:38.147771 | orchestrator |
2026-03-11 01:12:38.147776 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2026-03-11 01:12:38.147781 | orchestrator | Wednesday 11 March 2026 01:10:01 +0000 (0:00:02.702) 0:05:25.405 *******
2026-03-11 01:12:38.147785 | orchestrator | skipping: [testbed-node-3]
2026-03-11 01:12:38.147790 | orchestrator | skipping: [testbed-node-4]
2026-03-11 01:12:38.147795 | orchestrator | skipping: [testbed-node-5]
2026-03-11 01:12:38.147801 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:12:38.147806 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:12:38.147811 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:12:38.147816 | orchestrator |
2026-03-11 01:12:38.147821 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-11 01:12:38.147827 | orchestrator | Wednesday 11 March 2026 01:10:02 +0000 (0:00:00.812) 0:05:26.218 *******
2026-03-11 01:12:38.147832 | orchestrator |
2026-03-11 01:12:38.147837 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-11 01:12:38.147845 | orchestrator | Wednesday 11 March 2026 01:10:02 +0000 (0:00:00.129) 0:05:26.347 *******
2026-03-11 01:12:38.147850 | orchestrator |
2026-03-11 01:12:38.147855 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-11 01:12:38.147861 | orchestrator | Wednesday 11 March 2026 01:10:02 +0000 (0:00:00.128) 0:05:26.475 *******
2026-03-11 01:12:38.147866 | orchestrator |
2026-03-11 01:12:38.147872 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-11 01:12:38.147877 | orchestrator | Wednesday 11 March 2026 01:10:02 +0000 (0:00:00.131) 0:05:26.607 *******
2026-03-11 01:12:38.147882 | orchestrator |
2026-03-11 01:12:38.147887 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-11 01:12:38.147892 | orchestrator | Wednesday 11 March 2026 01:10:02 +0000 (0:00:00.294) 0:05:26.901 *******
2026-03-11 01:12:38.147901 | orchestrator |
2026-03-11 01:12:38.147906 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2026-03-11 01:12:38.147912 | orchestrator | Wednesday 11 March 2026 01:10:02 +0000 (0:00:00.127) 0:05:27.028 *******
2026-03-11 01:12:38.147917 | orchestrator |
2026-03-11 01:12:38.147925 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2026-03-11 01:12:38.147931 | orchestrator | Wednesday 11 March 2026 01:10:03 +0000 (0:00:00.158) 0:05:27.187 *******
2026-03-11 01:12:38.147936 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:38.147941 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:12:38.147946 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:12:38.147951 | orchestrator |
2026-03-11 01:12:38.147956 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2026-03-11 01:12:38.147962 | orchestrator | Wednesday 11 March 2026 01:10:10 +0000 (0:00:07.323) 0:05:34.511 *******
2026-03-11 01:12:38.147967 | orchestrator | changed: [testbed-node-0]
2026-03-11 01:12:38.147972 | orchestrator | changed: [testbed-node-1]
2026-03-11 01:12:38.147977 | orchestrator | changed: [testbed-node-2]
2026-03-11 01:12:38.147982 | orchestrator |
2026-03-11 01:12:38.147988 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2026-03-11 01:12:38.147993 | orchestrator | Wednesday 11 March 2026 01:10:22 +0000 (0:00:11.712) 0:05:46.223 *******
2026-03-11 01:12:38.147998 | orchestrator | changed: [testbed-node-4]
2026-03-11 01:12:38.148003 | orchestrator | changed: [testbed-node-5]
2026-03-11 01:12:38.148008 | orchestrator | changed: [testbed-node-3]
2026-03-11 01:12:38.148013 | orchestrator |
2026-03-11 01:12:38.148018 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2026-03-11 01:12:38.148024 | orchestrator | Wednesday 11 March 2026 01:10:42 +0000 (0:00:20.079) 0:06:06.302 *******
2026-03-11 01:12:38.148029 | orchestrator | changed: [testbed-node-4]
2026-03-11 01:12:38.148034 | orchestrator | changed: [testbed-node-3]
2026-03-11 01:12:38.148039 | orchestrator | changed: [testbed-node-5]
2026-03-11 01:12:38.148044 | orchestrator |
2026-03-11 01:12:38.148050 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2026-03-11 01:12:38.148055 | orchestrator | Wednesday 11 March 2026 01:11:11 +0000 (0:00:28.966) 0:06:35.269 *******
2026-03-11 01:12:38.148060 | orchestrator | changed: [testbed-node-3]
2026-03-11 01:12:38.148065 | orchestrator | changed: [testbed-node-5]
2026-03-11 01:12:38.148070 | orchestrator | changed: [testbed-node-4]
2026-03-11 01:12:38.148076 | orchestrator |
2026-03-11 01:12:38.148081 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2026-03-11 01:12:38.148086 | orchestrator
| Wednesday 11 March 2026 01:11:11 +0000 (0:00:00.677) 0:06:35.947 ******* 2026-03-11 01:12:38.148091 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:12:38.148096 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:12:38.148102 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:12:38.148107 | orchestrator | 2026-03-11 01:12:38.148113 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] ******************* 2026-03-11 01:12:38.148118 | orchestrator | Wednesday 11 March 2026 01:11:12 +0000 (0:00:00.671) 0:06:36.618 ******* 2026-03-11 01:12:38.148123 | orchestrator | changed: [testbed-node-3] 2026-03-11 01:12:38.148129 | orchestrator | changed: [testbed-node-4] 2026-03-11 01:12:38.148134 | orchestrator | changed: [testbed-node-5] 2026-03-11 01:12:38.148140 | orchestrator | 2026-03-11 01:12:38.148145 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] *** 2026-03-11 01:12:38.148151 | orchestrator | Wednesday 11 March 2026 01:11:31 +0000 (0:00:18.592) 0:06:55.211 ******* 2026-03-11 01:12:38.148156 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:38.148161 | orchestrator | 2026-03-11 01:12:38.148166 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] **** 2026-03-11 01:12:38.148171 | orchestrator | Wednesday 11 March 2026 01:11:31 +0000 (0:00:00.110) 0:06:55.321 ******* 2026-03-11 01:12:38.148176 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:38.148184 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:38.148189 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.148194 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.148199 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.148204 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left). 
2026-03-11 01:12:38.148210 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-11 01:12:38.148215 | orchestrator | 2026-03-11 01:12:38.148220 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] ************* 2026-03-11 01:12:38.148225 | orchestrator | Wednesday 11 March 2026 01:11:51 +0000 (0:00:20.517) 0:07:15.838 ******* 2026-03-11 01:12:38.148230 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.148235 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.148240 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.148245 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:38.148248 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:38.148251 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:38.148255 | orchestrator | 2026-03-11 01:12:38.148258 | orchestrator | TASK [nova-cell : Include discover_computes.yml] ******************************* 2026-03-11 01:12:38.148261 | orchestrator | Wednesday 11 March 2026 01:12:00 +0000 (0:00:08.571) 0:07:24.409 ******* 2026-03-11 01:12:38.148264 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:38.148267 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.148270 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.148276 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:38.148279 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.148282 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3 2026-03-11 01:12:38.148285 | orchestrator | 2026-03-11 01:12:38.148288 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2026-03-11 01:12:38.148291 | orchestrator | Wednesday 11 March 2026 01:12:03 +0000 (0:00:03.448) 0:07:27.858 ******* 2026-03-11 01:12:38.148294 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-11 01:12:38.148297 | 
orchestrator | 2026-03-11 01:12:38.148300 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2026-03-11 01:12:38.148304 | orchestrator | Wednesday 11 March 2026 01:12:17 +0000 (0:00:13.529) 0:07:41.387 ******* 2026-03-11 01:12:38.148307 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-11 01:12:38.148311 | orchestrator | 2026-03-11 01:12:38.148316 | orchestrator | TASK [nova-cell : Fail if cell settings not found] ***************************** 2026-03-11 01:12:38.148341 | orchestrator | Wednesday 11 March 2026 01:12:18 +0000 (0:00:01.260) 0:07:42.647 ******* 2026-03-11 01:12:38.148347 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:38.148351 | orchestrator | 2026-03-11 01:12:38.148356 | orchestrator | TASK [nova-cell : Discover nova hosts] ***************************************** 2026-03-11 01:12:38.148360 | orchestrator | Wednesday 11 March 2026 01:12:19 +0000 (0:00:01.206) 0:07:43.854 ******* 2026-03-11 01:12:38.148364 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2026-03-11 01:12:38.148369 | orchestrator | 2026-03-11 01:12:38.148374 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************ 2026-03-11 01:12:38.148378 | orchestrator | Wednesday 11 March 2026 01:12:29 +0000 (0:00:10.144) 0:07:53.998 ******* 2026-03-11 01:12:38.148383 | orchestrator | ok: [testbed-node-3] 2026-03-11 01:12:38.148388 | orchestrator | ok: [testbed-node-4] 2026-03-11 01:12:38.148392 | orchestrator | ok: [testbed-node-5] 2026-03-11 01:12:38.148396 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:12:38.148400 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:12:38.148405 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:12:38.148409 | orchestrator | 2026-03-11 01:12:38.148414 | orchestrator | PLAY [Refresh nova scheduler cell cache] *************************************** 2026-03-11 01:12:38.148418 | orchestrator | 2026-03-11 
01:12:38.148423 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] ***************************** 2026-03-11 01:12:38.148431 | orchestrator | Wednesday 11 March 2026 01:12:31 +0000 (0:00:01.934) 0:07:55.933 ******* 2026-03-11 01:12:38.148436 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:12:38.148441 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:12:38.148445 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:12:38.148450 | orchestrator | 2026-03-11 01:12:38.148455 | orchestrator | PLAY [Reload global Nova super conductor services] ***************************** 2026-03-11 01:12:38.148460 | orchestrator | 2026-03-11 01:12:38.148465 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] *** 2026-03-11 01:12:38.148470 | orchestrator | Wednesday 11 March 2026 01:12:33 +0000 (0:00:01.381) 0:07:57.315 ******* 2026-03-11 01:12:38.148475 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.148480 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.148485 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.148489 | orchestrator | 2026-03-11 01:12:38.148492 | orchestrator | PLAY [Reload Nova cell services] *********************************************** 2026-03-11 01:12:38.148495 | orchestrator | 2026-03-11 01:12:38.148498 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] ********* 2026-03-11 01:12:38.148501 | orchestrator | Wednesday 11 March 2026 01:12:33 +0000 (0:00:00.522) 0:07:57.837 ******* 2026-03-11 01:12:38.148504 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)  2026-03-11 01:12:38.148507 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)  2026-03-11 01:12:38.148510 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)  2026-03-11 01:12:38.148513 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)  2026-03-11 01:12:38.148516 | orchestrator | 
skipping: [testbed-node-3] => (item=nova-serialproxy)  2026-03-11 01:12:38.148520 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)  2026-03-11 01:12:38.148523 | orchestrator | skipping: [testbed-node-3] 2026-03-11 01:12:38.148526 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)  2026-03-11 01:12:38.148529 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)  2026-03-11 01:12:38.148532 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)  2026-03-11 01:12:38.148535 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)  2026-03-11 01:12:38.148538 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)  2026-03-11 01:12:38.148541 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)  2026-03-11 01:12:38.148544 | orchestrator | skipping: [testbed-node-4] 2026-03-11 01:12:38.148547 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)  2026-03-11 01:12:38.148550 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)  2026-03-11 01:12:38.148553 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)  2026-03-11 01:12:38.148556 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)  2026-03-11 01:12:38.148559 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)  2026-03-11 01:12:38.148562 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)  2026-03-11 01:12:38.148565 | orchestrator | skipping: [testbed-node-5] 2026-03-11 01:12:38.148569 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)  2026-03-11 01:12:38.148574 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)  2026-03-11 01:12:38.148579 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)  2026-03-11 01:12:38.148583 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)  2026-03-11 01:12:38.148588 | orchestrator | 
skipping: [testbed-node-0] => (item=nova-serialproxy)  2026-03-11 01:12:38.148593 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)  2026-03-11 01:12:38.148597 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.148606 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)  2026-03-11 01:12:38.148612 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)  2026-03-11 01:12:38.148617 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)  2026-03-11 01:12:38.148626 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)  2026-03-11 01:12:38.148631 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)  2026-03-11 01:12:38.148636 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)  2026-03-11 01:12:38.148641 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.148647 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)  2026-03-11 01:12:38.148651 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)  2026-03-11 01:12:38.148656 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)  2026-03-11 01:12:38.148661 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)  2026-03-11 01:12:38.148667 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)  2026-03-11 01:12:38.148675 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)  2026-03-11 01:12:38.148681 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.148686 | orchestrator | 2026-03-11 01:12:38.148691 | orchestrator | PLAY [Reload global Nova API services] ***************************************** 2026-03-11 01:12:38.148696 | orchestrator | 2026-03-11 01:12:38.148702 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] *************** 2026-03-11 01:12:38.148707 | orchestrator | Wednesday 11 March 2026 01:12:35 +0000 (0:00:01.322) 
0:07:59.160 ******* 2026-03-11 01:12:38.148712 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)  2026-03-11 01:12:38.148717 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)  2026-03-11 01:12:38.148722 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.148727 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)  2026-03-11 01:12:38.148732 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)  2026-03-11 01:12:38.148737 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.148742 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)  2026-03-11 01:12:38.148747 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)  2026-03-11 01:12:38.148753 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:12:38.148758 | orchestrator | 2026-03-11 01:12:38.148763 | orchestrator | PLAY [Run Nova API online data migrations] ************************************* 2026-03-11 01:12:38.148769 | orchestrator | 2026-03-11 01:12:38.148774 | orchestrator | TASK [nova : Run Nova API online database migrations] ************************** 2026-03-11 01:12:38.148779 | orchestrator | Wednesday 11 March 2026 01:12:35 +0000 (0:00:00.716) 0:07:59.876 ******* 2026-03-11 01:12:38.148784 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.148789 | orchestrator | 2026-03-11 01:12:38.148794 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************ 2026-03-11 01:12:38.148799 | orchestrator | 2026-03-11 01:12:38.148804 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ******************** 2026-03-11 01:12:38.148809 | orchestrator | Wednesday 11 March 2026 01:12:36 +0000 (0:00:00.643) 0:08:00.519 ******* 2026-03-11 01:12:38.148814 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:12:38.148820 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:12:38.148826 | orchestrator | skipping: [testbed-node-2] 
2026-03-11 01:12:38.148831 | orchestrator | 2026-03-11 01:12:38.148837 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:12:38.148842 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:12:38.148848 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2026-03-11 01:12:38.148854 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0 2026-03-11 01:12:38.148859 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=52  rescued=0 ignored=0 2026-03-11 01:12:38.148868 | orchestrator | testbed-node-3 : ok=45  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2026-03-11 01:12:38.148873 | orchestrator | testbed-node-4 : ok=39  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-11 01:12:38.148879 | orchestrator | testbed-node-5 : ok=39  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2026-03-11 01:12:38.148884 | orchestrator | 2026-03-11 01:12:38.148890 | orchestrator | 2026-03-11 01:12:38.148894 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:12:38.148897 | orchestrator | Wednesday 11 March 2026 01:12:37 +0000 (0:00:00.561) 0:08:01.081 ******* 2026-03-11 01:12:38.148901 | orchestrator | =============================================================================== 2026-03-11 01:12:38.148904 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 30.81s 2026-03-11 01:12:38.148907 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 28.97s 2026-03-11 01:12:38.148910 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 23.58s 2026-03-11 01:12:38.148913 | orchestrator | nova-cell : 
Waiting for nova-compute services to register themselves --- 20.52s 2026-03-11 01:12:38.148916 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 20.08s 2026-03-11 01:12:38.148923 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 19.59s 2026-03-11 01:12:38.148926 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 18.59s 2026-03-11 01:12:38.148929 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 18.39s 2026-03-11 01:12:38.148932 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 16.32s 2026-03-11 01:12:38.148935 | orchestrator | nova-cell : Create cell ------------------------------------------------ 14.01s 2026-03-11 01:12:38.148938 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 13.53s 2026-03-11 01:12:38.148941 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.97s 2026-03-11 01:12:38.148944 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.35s 2026-03-11 01:12:38.148947 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 11.71s 2026-03-11 01:12:38.148956 | orchestrator | nova : Restart nova-api container -------------------------------------- 10.25s 2026-03-11 01:12:38.148960 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 10.14s 2026-03-11 01:12:38.148963 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.57s 2026-03-11 01:12:38.148966 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.52s 2026-03-11 01:12:38.148969 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 7.32s 2026-03-11 01:12:38.148972 | orchestrator | service-ks-register : nova 
| Granting user roles ------------------------ 6.92s 2026-03-11 01:12:38.148975 | orchestrator | 2026-03-11 01:12:38 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:12:41.186135 | orchestrator | 2026-03-11 01:12:41 | INFO  | Task 9aa66614-4cf7-40ef-82b1-b27475200ab0 is in state STARTED 2026-03-11 01:12:41.186191 | orchestrator | 2026-03-11 01:12:41 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:12:44.229952 | orchestrator | 2026-03-11 01:12:44 | INFO  | Task 9aa66614-4cf7-40ef-82b1-b27475200ab0 is in state STARTED 2026-03-11 01:12:44.230974 | orchestrator | 2026-03-11 01:12:44 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:12:47.276859 | orchestrator | 2026-03-11 01:12:47 | INFO  | Task 9aa66614-4cf7-40ef-82b1-b27475200ab0 is in state STARTED 2026-03-11 01:12:47.276921 | orchestrator | 2026-03-11 01:12:47 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:12:50.332645 | orchestrator | 2026-03-11 01:12:50 | INFO  | Task 9aa66614-4cf7-40ef-82b1-b27475200ab0 is in state STARTED 2026-03-11 01:12:50.332721 | orchestrator | 2026-03-11 01:12:50 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:12:53.377666 | orchestrator | 2026-03-11 01:12:53 | INFO  | Task 9aa66614-4cf7-40ef-82b1-b27475200ab0 is in state STARTED 2026-03-11 01:12:53.377778 | orchestrator | 2026-03-11 01:12:53 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:12:56.417400 | orchestrator | 2026-03-11 01:12:56 | INFO  | Task 9aa66614-4cf7-40ef-82b1-b27475200ab0 is in state STARTED 2026-03-11 01:12:56.417448 | orchestrator | 2026-03-11 01:12:56 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:12:59.450807 | orchestrator | 2026-03-11 01:12:59 | INFO  | Task 9aa66614-4cf7-40ef-82b1-b27475200ab0 is in state STARTED 2026-03-11 01:12:59.450880 | orchestrator | 2026-03-11 01:12:59 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:13:02.499826 | orchestrator | 2026-03-11 01:13:02 | INFO  | Task 
9aa66614-4cf7-40ef-82b1-b27475200ab0 is in state STARTED 2026-03-11 01:13:02.499891 | orchestrator | 2026-03-11 01:13:02 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:13:05.544412 | orchestrator | 2026-03-11 01:13:05 | INFO  | Task 9aa66614-4cf7-40ef-82b1-b27475200ab0 is in state STARTED 2026-03-11 01:13:05.544467 | orchestrator | 2026-03-11 01:13:05 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:13:08.588147 | orchestrator | 2026-03-11 01:13:08 | INFO  | Task 9aa66614-4cf7-40ef-82b1-b27475200ab0 is in state STARTED 2026-03-11 01:13:08.588212 | orchestrator | 2026-03-11 01:13:08 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:13:11.625357 | orchestrator | 2026-03-11 01:13:11 | INFO  | Task 9aa66614-4cf7-40ef-82b1-b27475200ab0 is in state STARTED 2026-03-11 01:13:11.625435 | orchestrator | 2026-03-11 01:13:11 | INFO  | Wait 1 second(s) until the next check 2026-03-11 01:13:14.671813 | orchestrator | 2026-03-11 01:13:14 | INFO  | Task 9aa66614-4cf7-40ef-82b1-b27475200ab0 is in state SUCCESS 2026-03-11 01:13:14.671968 | orchestrator | 2026-03-11 01:13:14 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:13:14.674155 | orchestrator | 2026-03-11 01:13:14.674194 | orchestrator | 2026-03-11 01:13:14.674199 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2026-03-11 01:13:14.674203 | orchestrator | 2026-03-11 01:13:14.674206 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2026-03-11 01:13:14.674242 | orchestrator | Wednesday 11 March 2026 01:08:41 +0000 (0:00:00.247) 0:00:00.247 ******* 2026-03-11 01:13:14.674247 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:14.674251 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:13:14.674263 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:13:14.674268 | orchestrator | 2026-03-11 01:13:14.674271 | orchestrator | TASK [Group hosts based on enabled services] 
*********************************** 2026-03-11 01:13:14.674274 | orchestrator | Wednesday 11 March 2026 01:08:41 +0000 (0:00:00.295) 0:00:00.543 ******* 2026-03-11 01:13:14.674278 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2026-03-11 01:13:14.674281 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2026-03-11 01:13:14.674284 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True) 2026-03-11 01:13:14.674287 | orchestrator | 2026-03-11 01:13:14.674290 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2026-03-11 01:13:14.674294 | orchestrator | 2026-03-11 01:13:14.674297 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-11 01:13:14.674300 | orchestrator | Wednesday 11 March 2026 01:08:42 +0000 (0:00:00.411) 0:00:00.955 ******* 2026-03-11 01:13:14.674310 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:13:14.674406 | orchestrator | 2026-03-11 01:13:14.674411 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2026-03-11 01:13:14.674414 | orchestrator | Wednesday 11 March 2026 01:08:42 +0000 (0:00:00.550) 0:00:01.506 ******* 2026-03-11 01:13:14.674417 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2026-03-11 01:13:14.674421 | orchestrator | 2026-03-11 01:13:14.674424 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2026-03-11 01:13:14.674427 | orchestrator | Wednesday 11 March 2026 01:08:46 +0000 (0:00:03.386) 0:00:04.892 ******* 2026-03-11 01:13:14.674430 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2026-03-11 01:13:14.674462 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 
2026-03-11 01:13:14.674466 | orchestrator | 2026-03-11 01:13:14.674469 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2026-03-11 01:13:14.674472 | orchestrator | Wednesday 11 March 2026 01:08:53 +0000 (0:00:07.198) 0:00:12.091 ******* 2026-03-11 01:13:14.674475 | orchestrator | ok: [testbed-node-0] => (item=service) 2026-03-11 01:13:14.674479 | orchestrator | 2026-03-11 01:13:14.674482 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2026-03-11 01:13:14.674485 | orchestrator | Wednesday 11 March 2026 01:08:56 +0000 (0:00:03.253) 0:00:15.345 ******* 2026-03-11 01:13:14.674491 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-11 01:13:14.674538 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2026-03-11 01:13:14.674543 | orchestrator | [WARNING]: Module did not set no_log for update_password 2026-03-11 01:13:14.674549 | orchestrator | 2026-03-11 01:13:14.674554 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2026-03-11 01:13:14.674559 | orchestrator | Wednesday 11 March 2026 01:09:04 +0000 (0:00:07.542) 0:00:22.888 ******* 2026-03-11 01:13:14.674562 | orchestrator | ok: [testbed-node-0] => (item=admin) 2026-03-11 01:13:14.674566 | orchestrator | 2026-03-11 01:13:14.674571 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2026-03-11 01:13:14.674575 | orchestrator | Wednesday 11 March 2026 01:09:07 +0000 (0:00:03.441) 0:00:26.330 ******* 2026-03-11 01:13:14.674584 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-11 01:13:14.674590 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2026-03-11 01:13:14.674595 | orchestrator | 2026-03-11 01:13:14.674632 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2026-03-11 
01:13:14.674639 | orchestrator | Wednesday 11 March 2026 01:09:14 +0000 (0:00:07.179) 0:00:33.509 ******* 2026-03-11 01:13:14.674644 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2026-03-11 01:13:14.674649 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2026-03-11 01:13:14.674654 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member) 2026-03-11 01:13:14.674659 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2026-03-11 01:13:14.674664 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2026-03-11 01:13:14.674669 | orchestrator | 2026-03-11 01:13:14.674698 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-11 01:13:14.674703 | orchestrator | Wednesday 11 March 2026 01:09:31 +0000 (0:00:16.803) 0:00:50.313 ******* 2026-03-11 01:13:14.674706 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:13:14.674710 | orchestrator | 2026-03-11 01:13:14.674713 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2026-03-11 01:13:14.674720 | orchestrator | Wednesday 11 March 2026 01:09:32 +0000 (0:00:00.732) 0:00:51.045 ******* 2026-03-11 01:13:14.674724 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.674727 | orchestrator | 2026-03-11 01:13:14.674730 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2026-03-11 01:13:14.674738 | orchestrator | Wednesday 11 March 2026 01:09:38 +0000 (0:00:05.954) 0:00:57.000 ******* 2026-03-11 01:13:14.674741 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.674744 | orchestrator | 2026-03-11 01:13:14.674748 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-11 01:13:14.674757 | orchestrator | Wednesday 11 
March 2026 01:09:43 +0000 (0:00:05.218) 0:01:02.218 ******* 2026-03-11 01:13:14.674761 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:14.674764 | orchestrator | 2026-03-11 01:13:14.674767 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2026-03-11 01:13:14.674770 | orchestrator | Wednesday 11 March 2026 01:09:46 +0000 (0:00:03.198) 0:01:05.417 ******* 2026-03-11 01:13:14.674774 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-11 01:13:14.674777 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2026-03-11 01:13:14.674780 | orchestrator | 2026-03-11 01:13:14.674783 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2026-03-11 01:13:14.674786 | orchestrator | Wednesday 11 March 2026 01:09:57 +0000 (0:00:10.586) 0:01:16.003 ******* 2026-03-11 01:13:14.674789 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2026-03-11 01:13:14.674792 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2026-03-11 01:13:14.674796 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2026-03-11 01:13:14.674803 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2026-03-11 01:13:14.674807 | orchestrator | 2026-03-11 01:13:14.674810 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2026-03-11 01:13:14.674813 | orchestrator | Wednesday 11 March 2026 01:10:13 +0000 (0:00:16.530) 0:01:32.534 ******* 2026-03-11 01:13:14.674816 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.674819 | 
orchestrator | 2026-03-11 01:13:14.674822 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2026-03-11 01:13:14.674825 | orchestrator | Wednesday 11 March 2026 01:10:18 +0000 (0:00:04.371) 0:01:36.906 ******* 2026-03-11 01:13:14.674828 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.674831 | orchestrator | 2026-03-11 01:13:14.674834 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2026-03-11 01:13:14.674837 | orchestrator | Wednesday 11 March 2026 01:10:23 +0000 (0:00:05.012) 0:01:41.919 ******* 2026-03-11 01:13:14.674840 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:13:14.674843 | orchestrator | 2026-03-11 01:13:14.674846 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2026-03-11 01:13:14.674849 | orchestrator | Wednesday 11 March 2026 01:10:23 +0000 (0:00:00.183) 0:01:42.102 ******* 2026-03-11 01:13:14.674853 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:14.674856 | orchestrator | 2026-03-11 01:13:14.674859 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-11 01:13:14.674862 | orchestrator | Wednesday 11 March 2026 01:10:27 +0000 (0:00:04.158) 0:01:46.261 ******* 2026-03-11 01:13:14.674865 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:13:14.674868 | orchestrator | 2026-03-11 01:13:14.674871 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2026-03-11 01:13:14.675081 | orchestrator | Wednesday 11 March 2026 01:10:28 +0000 (0:00:01.173) 0:01:47.435 ******* 2026-03-11 01:13:14.675085 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:14.675088 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.675091 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:14.675094 | 
orchestrator | 2026-03-11 01:13:14.675101 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2026-03-11 01:13:14.675104 | orchestrator | Wednesday 11 March 2026 01:10:33 +0000 (0:00:04.574) 0:01:52.010 ******* 2026-03-11 01:13:14.675107 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:14.675110 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:14.675113 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.675117 | orchestrator | 2026-03-11 01:13:14.675120 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2026-03-11 01:13:14.675123 | orchestrator | Wednesday 11 March 2026 01:10:37 +0000 (0:00:04.196) 0:01:56.206 ******* 2026-03-11 01:13:14.675126 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.675129 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:14.675132 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:14.675135 | orchestrator | 2026-03-11 01:13:14.675138 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2026-03-11 01:13:14.675141 | orchestrator | Wednesday 11 March 2026 01:10:38 +0000 (0:00:00.686) 0:01:56.893 ******* 2026-03-11 01:13:14.675144 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:13:14.675148 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:14.675151 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:13:14.675154 | orchestrator | 2026-03-11 01:13:14.675157 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2026-03-11 01:13:14.675160 | orchestrator | Wednesday 11 March 2026 01:10:39 +0000 (0:00:01.574) 0:01:58.467 ******* 2026-03-11 01:13:14.675163 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:14.675166 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:14.675169 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.675173 | orchestrator | 2026-03-11 
01:13:14.675176 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2026-03-11 01:13:14.675179 | orchestrator | Wednesday 11 March 2026 01:10:40 +0000 (0:00:01.056) 0:01:59.523 ******* 2026-03-11 01:13:14.675182 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.675185 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:14.675188 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:14.675191 | orchestrator | 2026-03-11 01:13:14.675194 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2026-03-11 01:13:14.675197 | orchestrator | Wednesday 11 March 2026 01:10:41 +0000 (0:00:01.020) 0:02:00.544 ******* 2026-03-11 01:13:14.675200 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:14.675203 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:14.675206 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.675209 | orchestrator | 2026-03-11 01:13:14.675223 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2026-03-11 01:13:14.675226 | orchestrator | Wednesday 11 March 2026 01:10:43 +0000 (0:00:01.658) 0:02:02.202 ******* 2026-03-11 01:13:14.675229 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.675232 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:14.675235 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:14.675238 | orchestrator | 2026-03-11 01:13:14.675242 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2026-03-11 01:13:14.675245 | orchestrator | Wednesday 11 March 2026 01:10:45 +0000 (0:00:01.579) 0:02:03.782 ******* 2026-03-11 01:13:14.675248 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:14.675251 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:13:14.675269 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:13:14.675273 | orchestrator | 2026-03-11 01:13:14.675276 | orchestrator 
| TASK [octavia : Gather facts] ************************************************** 2026-03-11 01:13:14.675279 | orchestrator | Wednesday 11 March 2026 01:10:45 +0000 (0:00:00.573) 0:02:04.355 ******* 2026-03-11 01:13:14.675282 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:13:14.675285 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:13:14.675288 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:14.675291 | orchestrator | 2026-03-11 01:13:14.675294 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-11 01:13:14.675300 | orchestrator | Wednesday 11 March 2026 01:10:48 +0000 (0:00:02.477) 0:02:06.832 ******* 2026-03-11 01:13:14.675306 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:13:14.675309 | orchestrator | 2026-03-11 01:13:14.675312 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2026-03-11 01:13:14.675316 | orchestrator | Wednesday 11 March 2026 01:10:49 +0000 (0:00:00.774) 0:02:07.607 ******* 2026-03-11 01:13:14.675319 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:14.675322 | orchestrator | 2026-03-11 01:13:14.675325 | orchestrator | TASK [octavia : Get service project id] **************************************** 2026-03-11 01:13:14.675328 | orchestrator | Wednesday 11 March 2026 01:10:52 +0000 (0:00:03.031) 0:02:10.639 ******* 2026-03-11 01:13:14.675331 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:14.675334 | orchestrator | 2026-03-11 01:13:14.675337 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2026-03-11 01:13:14.675340 | orchestrator | Wednesday 11 March 2026 01:10:55 +0000 (0:00:02.947) 0:02:13.587 ******* 2026-03-11 01:13:14.675343 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2026-03-11 01:13:14.675346 | orchestrator | ok: [testbed-node-0] => 
(item=lb-health-mgr-sec-grp) 2026-03-11 01:13:14.675349 | orchestrator | 2026-03-11 01:13:14.675352 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2026-03-11 01:13:14.675355 | orchestrator | Wednesday 11 March 2026 01:11:01 +0000 (0:00:06.490) 0:02:20.077 ******* 2026-03-11 01:13:14.675358 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:14.675361 | orchestrator | 2026-03-11 01:13:14.675364 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2026-03-11 01:13:14.675367 | orchestrator | Wednesday 11 March 2026 01:11:04 +0000 (0:00:03.168) 0:02:23.246 ******* 2026-03-11 01:13:14.675371 | orchestrator | ok: [testbed-node-0] 2026-03-11 01:13:14.675374 | orchestrator | ok: [testbed-node-1] 2026-03-11 01:13:14.675377 | orchestrator | ok: [testbed-node-2] 2026-03-11 01:13:14.675380 | orchestrator | 2026-03-11 01:13:14.675383 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2026-03-11 01:13:14.675386 | orchestrator | Wednesday 11 March 2026 01:11:04 +0000 (0:00:00.307) 0:02:23.553 ******* 2026-03-11 01:13:14.675391 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 01:13:14.675406 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 01:13:14.675415 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 01:13:14.675419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-11 01:13:14.675422 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-11 01:13:14.675426 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-11 01:13:14.675429 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:14.675433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:14.675445 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:14.675451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:14.675457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:14.675461 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:14.675464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:13:14.675468 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:13:14.675471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:13:14.675476 | orchestrator | 2026-03-11 01:13:14.675479 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************ 2026-03-11 01:13:14.675483 | orchestrator | Wednesday 11 March 2026 01:11:07 +0000 (0:00:02.141) 0:02:25.695 ******* 2026-03-11 01:13:14.675486 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:13:14.675489 | orchestrator | 2026-03-11 01:13:14.675499 | orchestrator | TASK [octavia : Set octavia policy file] 
*************************************** 2026-03-11 01:13:14.675503 | orchestrator | Wednesday 11 March 2026 01:11:07 +0000 (0:00:00.131) 0:02:25.826 ******* 2026-03-11 01:13:14.675507 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:13:14.675512 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:13:14.675517 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:13:14.675523 | orchestrator | 2026-03-11 01:13:14.675528 | orchestrator | TASK [octavia : Copying over existing policy file] ***************************** 2026-03-11 01:13:14.675532 | orchestrator | Wednesday 11 March 2026 01:11:07 +0000 (0:00:00.458) 0:02:26.285 ******* 2026-03-11 01:13:14.675540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-11 01:13:14.675546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-11 01:13:14.675552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 01:13:14.675558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 01:13:14.675566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': 
['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 01:13:14.675586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 01:13:14.675595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 01:13:14.675599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 01:13:14.675602 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:13:14.675605 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:13:14.675608 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:13:14.675611 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:13:14.675615 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2026-03-11 01:13:14.675629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2026-03-11 01:13:14.675632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2026-03-11 01:13:14.675638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2026-03-11 01:13:14.675641 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2026-03-11 01:13:14.675644 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:13:14.675647 | orchestrator | 2026-03-11 01:13:14.675651 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-11 01:13:14.675654 | orchestrator | Wednesday 11 March 2026 01:11:08 +0000 (0:00:00.730) 0:02:27.015 ******* 2026-03-11 01:13:14.675657 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2026-03-11 01:13:14.675660 | orchestrator | 2026-03-11 01:13:14.675663 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ******** 2026-03-11 01:13:14.675666 | orchestrator | Wednesday 11 March 2026 01:11:08 +0000 (0:00:00.505) 0:02:27.521 ******* 2026-03-11 01:13:14.675670 | 
orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:14.675685 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:14.675691 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:14.675697 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:14.675702 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:14.675707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:14.675715 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.675720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.675729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.675737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.675743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.675749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.675758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:14.675765 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:14.675774 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:14.675780 | orchestrator | 
2026-03-11 01:13:14.675786 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2026-03-11 01:13:14.675789 | orchestrator | Wednesday 11 March 2026 01:11:13 +0000 (0:00:04.681) 0:02:32.202 *******
2026-03-11 01:13:14.675795 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:14.675799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:14.675803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.675808 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.675812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:14.675816 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:13:14.675822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:14.675826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:14.675832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.675836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.675842 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:14.675846 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:13:14.675849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:14.675853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:14.675859 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.675865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.675869 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:14.675875 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:13:14.675878 | orchestrator | 
2026-03-11 01:13:14.675882 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2026-03-11 01:13:14.675885 | orchestrator | Wednesday 11 March 2026 01:11:14 +0000 (0:00:01.059) 0:02:33.261 *******
2026-03-11 01:13:14.675889 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:14.675893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:14.675897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.675903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.675911 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:14.675915 | orchestrator | skipping: [testbed-node-2]
2026-03-11 01:13:14.675919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:14.675925 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:14.675929 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.675933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.675940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:14.675943 | orchestrator | skipping: [testbed-node-0]
2026-03-11 01:13:14.675948 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:14.675955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:14.675958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.675961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.675965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:14.675968 | orchestrator | skipping: [testbed-node-1]
2026-03-11 01:13:14.675971 | orchestrator | 
2026-03-11 01:13:14.675981 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2026-03-11 01:13:14.675985 | orchestrator | Wednesday 11 March 2026 01:11:15 +0000 (0:00:01.170) 0:02:34.432 *******
2026-03-11 01:13:14.675995 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:14.676001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:14.676008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2026-03-11 01:13:14.676011 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:14.676014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:14.676018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2026-03-11 01:13:14.676023 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.676028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.676033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.676036 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.676040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.676043 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2026-03-11 01:13:14.676048 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:14.676052 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2026-03-11 01:13:14.676058 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:13:14.676062 | orchestrator | 2026-03-11 01:13:14.676065 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ******************************** 2026-03-11 01:13:14.676068 | orchestrator | Wednesday 11 March 2026 01:11:20 +0000 (0:00:04.766) 0:02:39.198 ******* 2026-03-11 01:13:14.676071 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-11 01:13:14.676075 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-11 01:13:14.676078 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2) 2026-03-11 01:13:14.676081 | orchestrator | 2026-03-11 01:13:14.676084 | orchestrator | TASK [octavia : Copying over octavia.conf] ************************************* 2026-03-11 01:13:14.676087 | orchestrator | Wednesday 11 March 2026 01:11:22 +0000 (0:00:01.940) 0:02:41.138 ******* 2026-03-11 01:13:14.676090 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 01:13:14.676094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 01:13:14.676100 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 01:13:14.676108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-11 01:13:14.676111 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-11 01:13:14.676114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 
'dimensions': {}}}) 2026-03-11 01:13:14.676118 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:14.676121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:14.676124 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 
2026-03-11 01:13:14.676138 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:14.676144 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:14.676147 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:14.676151 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:13:14.676154 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:13:14.676157 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:13:14.676160 | orchestrator | 2026-03-11 01:13:14.676163 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2026-03-11 01:13:14.676166 | orchestrator | Wednesday 
11 March 2026 01:11:39 +0000 (0:00:16.983) 0:02:58.122 ******* 2026-03-11 01:13:14.676172 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.676180 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:14.676185 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:14.676190 | orchestrator | 2026-03-11 01:13:14.676194 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2026-03-11 01:13:14.676198 | orchestrator | Wednesday 11 March 2026 01:11:40 +0000 (0:00:01.432) 0:02:59.554 ******* 2026-03-11 01:13:14.676203 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-11 01:13:14.676208 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-11 01:13:14.676216 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-11 01:13:14.676221 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-11 01:13:14.676227 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-11 01:13:14.676232 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-11 01:13:14.676237 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-11 01:13:14.676243 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-11 01:13:14.676248 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-11 01:13:14.676266 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-11 01:13:14.676271 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-11 01:13:14.676274 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-11 01:13:14.676277 | orchestrator | 2026-03-11 01:13:14.676280 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2026-03-11 01:13:14.676284 | orchestrator | Wednesday 11 March 2026 
01:11:45 +0000 (0:00:04.695) 0:03:04.249 ******* 2026-03-11 01:13:14.676287 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-11 01:13:14.676291 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-11 01:13:14.676299 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-11 01:13:14.676304 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-11 01:13:14.676309 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-11 01:13:14.676314 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2026-03-11 01:13:14.676318 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-11 01:13:14.676322 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-11 01:13:14.676327 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-11 01:13:14.676332 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-11 01:13:14.676338 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-11 01:13:14.676343 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-11 01:13:14.676349 | orchestrator | 2026-03-11 01:13:14.676354 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2026-03-11 01:13:14.676359 | orchestrator | Wednesday 11 March 2026 01:11:50 +0000 (0:00:04.988) 0:03:09.238 ******* 2026-03-11 01:13:14.676365 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2026-03-11 01:13:14.676369 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2026-03-11 01:13:14.676372 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2026-03-11 01:13:14.676375 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2026-03-11 01:13:14.676378 | orchestrator | changed: 
[testbed-node-1] => (item=client_ca.cert.pem) 2026-03-11 01:13:14.676381 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2026-03-11 01:13:14.676385 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2026-03-11 01:13:14.676388 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2026-03-11 01:13:14.676391 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2026-03-11 01:13:14.676394 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2026-03-11 01:13:14.676400 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2026-03-11 01:13:14.676403 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2026-03-11 01:13:14.676406 | orchestrator | 2026-03-11 01:13:14.676409 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2026-03-11 01:13:14.676412 | orchestrator | Wednesday 11 March 2026 01:11:56 +0000 (0:00:05.352) 0:03:14.590 ******* 2026-03-11 01:13:14.676415 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 
01:13:14.676422 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 01:13:14.676428 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2026-03-11 01:13:14.676432 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-11 01:13:14.676435 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-11 01:13:14.676441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2026-03-11 01:13:14.676444 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:14.676449 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:14.676452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:14.676458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:14.676462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:14.676467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2026-03-11 01:13:14.676470 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:13:14.676473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:13:14.676479 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2026-03-11 01:13:14.676482 | orchestrator | 2026-03-11 01:13:14.676485 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2026-03-11 01:13:14.676489 | orchestrator | Wednesday 11 March 2026 01:12:00 +0000 (0:00:04.001) 0:03:18.592 ******* 2026-03-11 01:13:14.676492 | orchestrator | skipping: [testbed-node-0] 2026-03-11 01:13:14.676495 | orchestrator | skipping: [testbed-node-1] 2026-03-11 01:13:14.676498 | orchestrator | skipping: [testbed-node-2] 2026-03-11 01:13:14.676501 | orchestrator | 2026-03-11 01:13:14.676504 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2026-03-11 01:13:14.676507 | orchestrator | 
Wednesday 11 March 2026 01:12:00 +0000 (0:00:00.250) 0:03:18.842 ******* 2026-03-11 01:13:14.676510 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.676513 | orchestrator | 2026-03-11 01:13:14.676516 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2026-03-11 01:13:14.676519 | orchestrator | Wednesday 11 March 2026 01:12:02 +0000 (0:00:02.244) 0:03:21.086 ******* 2026-03-11 01:13:14.676522 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.676525 | orchestrator | 2026-03-11 01:13:14.676528 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2026-03-11 01:13:14.676533 | orchestrator | Wednesday 11 March 2026 01:12:04 +0000 (0:00:02.142) 0:03:23.229 ******* 2026-03-11 01:13:14.676536 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.676540 | orchestrator | 2026-03-11 01:13:14.676546 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2026-03-11 01:13:14.676550 | orchestrator | Wednesday 11 March 2026 01:12:07 +0000 (0:00:02.921) 0:03:26.151 ******* 2026-03-11 01:13:14.676558 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.676564 | orchestrator | 2026-03-11 01:13:14.676569 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2026-03-11 01:13:14.676574 | orchestrator | Wednesday 11 March 2026 01:12:10 +0000 (0:00:02.695) 0:03:28.846 ******* 2026-03-11 01:13:14.676579 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.676585 | orchestrator | 2026-03-11 01:13:14.676589 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-11 01:13:14.676593 | orchestrator | Wednesday 11 March 2026 01:12:27 +0000 (0:00:17.619) 0:03:46.466 ******* 2026-03-11 01:13:14.676596 | orchestrator | 2026-03-11 01:13:14.676599 | orchestrator | TASK [octavia : Flush handlers] 
************************************************ 2026-03-11 01:13:14.676602 | orchestrator | Wednesday 11 March 2026 01:12:27 +0000 (0:00:00.068) 0:03:46.534 ******* 2026-03-11 01:13:14.676605 | orchestrator | 2026-03-11 01:13:14.676608 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2026-03-11 01:13:14.676611 | orchestrator | Wednesday 11 March 2026 01:12:28 +0000 (0:00:00.099) 0:03:46.634 ******* 2026-03-11 01:13:14.676614 | orchestrator | 2026-03-11 01:13:14.676618 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2026-03-11 01:13:14.676623 | orchestrator | Wednesday 11 March 2026 01:12:28 +0000 (0:00:00.070) 0:03:46.704 ******* 2026-03-11 01:13:14.676628 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.676634 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:14.676639 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:14.676643 | orchestrator | 2026-03-11 01:13:14.676648 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2026-03-11 01:13:14.676653 | orchestrator | Wednesday 11 March 2026 01:12:38 +0000 (0:00:10.442) 0:03:57.146 ******* 2026-03-11 01:13:14.676658 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.676662 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:14.676667 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:14.676672 | orchestrator | 2026-03-11 01:13:14.676677 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2026-03-11 01:13:14.676682 | orchestrator | Wednesday 11 March 2026 01:12:49 +0000 (0:00:10.539) 0:04:07.686 ******* 2026-03-11 01:13:14.676687 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:14.676692 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:14.676697 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.676702 | orchestrator | 2026-03-11 
01:13:14.676707 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2026-03-11 01:13:14.676712 | orchestrator | Wednesday 11 March 2026 01:12:57 +0000 (0:00:08.628) 0:04:16.315 ******* 2026-03-11 01:13:14.676717 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.676723 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:14.676728 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:14.676734 | orchestrator | 2026-03-11 01:13:14.676739 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2026-03-11 01:13:14.676744 | orchestrator | Wednesday 11 March 2026 01:13:02 +0000 (0:00:04.972) 0:04:21.287 ******* 2026-03-11 01:13:14.676750 | orchestrator | changed: [testbed-node-0] 2026-03-11 01:13:14.676753 | orchestrator | changed: [testbed-node-1] 2026-03-11 01:13:14.676756 | orchestrator | changed: [testbed-node-2] 2026-03-11 01:13:14.676759 | orchestrator | 2026-03-11 01:13:14.676763 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:13:14.676766 | orchestrator | testbed-node-0 : ok=57  changed=38  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2026-03-11 01:13:14.676769 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-11 01:13:14.676773 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2026-03-11 01:13:14.676779 | orchestrator | 2026-03-11 01:13:14.676783 | orchestrator | 2026-03-11 01:13:14.676786 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:13:14.676789 | orchestrator | Wednesday 11 March 2026 01:13:12 +0000 (0:00:09.994) 0:04:31.281 ******* 2026-03-11 01:13:14.676795 | orchestrator | =============================================================================== 2026-03-11 01:13:14.676801 | 
orchestrator | octavia : Running Octavia bootstrap container -------------------------- 17.62s 2026-03-11 01:13:14.676806 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.98s 2026-03-11 01:13:14.676811 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.80s 2026-03-11 01:13:14.676817 | orchestrator | octavia : Add rules for security groups -------------------------------- 16.53s 2026-03-11 01:13:14.676821 | orchestrator | octavia : Create security groups for octavia --------------------------- 10.59s 2026-03-11 01:13:14.676827 | orchestrator | octavia : Restart octavia-driver-agent container ----------------------- 10.54s 2026-03-11 01:13:14.676831 | orchestrator | octavia : Restart octavia-api container -------------------------------- 10.44s 2026-03-11 01:13:14.676836 | orchestrator | octavia : Restart octavia-worker container ------------------------------ 9.99s 2026-03-11 01:13:14.676841 | orchestrator | octavia : Restart octavia-health-manager container ---------------------- 8.63s 2026-03-11 01:13:14.676845 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 7.54s 2026-03-11 01:13:14.676850 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 7.20s 2026-03-11 01:13:14.676857 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.18s 2026-03-11 01:13:14.676863 | orchestrator | octavia : Get security groups for octavia ------------------------------- 6.49s 2026-03-11 01:13:14.676867 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 5.96s 2026-03-11 01:13:14.676872 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.36s 2026-03-11 01:13:14.676877 | orchestrator | octavia : Create nova keypair for amphora ------------------------------- 5.22s 2026-03-11 01:13:14.676882 | orchestrator | 
octavia : Create loadbalancer management subnet ------------------------- 5.01s 2026-03-11 01:13:14.676886 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 4.99s 2026-03-11 01:13:14.676891 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 4.97s 2026-03-11 01:13:14.676896 | orchestrator | octavia : Copying over config.json files for services ------------------- 4.77s 2026-03-11 01:13:17.714490 | orchestrator | 2026-03-11 01:13:17 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:13:20.752233 | orchestrator | 2026-03-11 01:13:20 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:13:23.783226 | orchestrator | 2026-03-11 01:13:23 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:13:26.822149 | orchestrator | 2026-03-11 01:13:26 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:13:29.867326 | orchestrator | 2026-03-11 01:13:29 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:13:32.908760 | orchestrator | 2026-03-11 01:13:32 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:13:35.950977 | orchestrator | 2026-03-11 01:13:35 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:13:38.986702 | orchestrator | 2026-03-11 01:13:38 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:13:42.029718 | orchestrator | 2026-03-11 01:13:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:13:45.067700 | orchestrator | 2026-03-11 01:13:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:13:48.107485 | orchestrator | 2026-03-11 01:13:48 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:13:51.154075 | orchestrator | 2026-03-11 01:13:51 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:13:54.196467 | orchestrator | 2026-03-11 01:13:54 | INFO  | Wait 1 
second(s) until refresh of running tasks 2026-03-11 01:13:57.239647 | orchestrator | 2026-03-11 01:13:57 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:14:00.279390 | orchestrator | 2026-03-11 01:14:00 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:14:03.319380 | orchestrator | 2026-03-11 01:14:03 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:14:06.372839 | orchestrator | 2026-03-11 01:14:06 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:14:09.411581 | orchestrator | 2026-03-11 01:14:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:14:12.455194 | orchestrator | 2026-03-11 01:14:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2026-03-11 01:14:15.485634 | orchestrator | 2026-03-11 01:14:15.814421 | orchestrator | 2026-03-11 01:14:15.819815 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Wed Mar 11 01:14:15 UTC 2026 2026-03-11 01:14:15.819873 | orchestrator | 2026-03-11 01:14:16.156101 | orchestrator | ok: Runtime: 0:33:45.710271 2026-03-11 01:14:16.566551 | 2026-03-11 01:14:16.566714 | TASK [Bootstrap services] 2026-03-11 01:14:17.391932 | orchestrator | 2026-03-11 01:14:17.392095 | orchestrator | # BOOTSTRAP 2026-03-11 01:14:17.392113 | orchestrator | 2026-03-11 01:14:17.392121 | orchestrator | + set -e 2026-03-11 01:14:17.392129 | orchestrator | + echo 2026-03-11 01:14:17.392176 | orchestrator | + echo '# BOOTSTRAP' 2026-03-11 01:14:17.392190 | orchestrator | + echo 2026-03-11 01:14:17.392218 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2026-03-11 01:14:17.398535 | orchestrator | + set -e 2026-03-11 01:14:17.398620 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2026-03-11 01:14:21.687489 | orchestrator | 2026-03-11 01:14:21 | INFO  | It takes a moment until task d47fce05-ae23-415c-8cd9-480bece15f42 (flavor-manager) has been started and output is visible here. 
2026-03-11 01:14:29.491207 | orchestrator | 2026-03-11 01:14:24 | INFO  | Flavor SCS-1L-1 created 2026-03-11 01:14:29.491275 | orchestrator | 2026-03-11 01:14:24 | INFO  | Flavor SCS-1L-1-5 created 2026-03-11 01:14:29.491283 | orchestrator | 2026-03-11 01:14:24 | INFO  | Flavor SCS-1V-2 created 2026-03-11 01:14:29.491288 | orchestrator | 2026-03-11 01:14:25 | INFO  | Flavor SCS-1V-2-5 created 2026-03-11 01:14:29.491293 | orchestrator | 2026-03-11 01:14:25 | INFO  | Flavor SCS-1V-4 created 2026-03-11 01:14:29.491297 | orchestrator | 2026-03-11 01:14:25 | INFO  | Flavor SCS-1V-4-10 created 2026-03-11 01:14:29.491302 | orchestrator | 2026-03-11 01:14:25 | INFO  | Flavor SCS-1V-8 created 2026-03-11 01:14:29.491306 | orchestrator | 2026-03-11 01:14:25 | INFO  | Flavor SCS-1V-8-20 created 2026-03-11 01:14:29.491317 | orchestrator | 2026-03-11 01:14:25 | INFO  | Flavor SCS-2V-4 created 2026-03-11 01:14:29.491325 | orchestrator | 2026-03-11 01:14:26 | INFO  | Flavor SCS-2V-4-10 created 2026-03-11 01:14:29.491332 | orchestrator | 2026-03-11 01:14:26 | INFO  | Flavor SCS-2V-8 created 2026-03-11 01:14:29.491340 | orchestrator | 2026-03-11 01:14:26 | INFO  | Flavor SCS-2V-8-20 created 2026-03-11 01:14:29.491347 | orchestrator | 2026-03-11 01:14:26 | INFO  | Flavor SCS-2V-16 created 2026-03-11 01:14:29.491353 | orchestrator | 2026-03-11 01:14:26 | INFO  | Flavor SCS-2V-16-50 created 2026-03-11 01:14:29.491359 | orchestrator | 2026-03-11 01:14:26 | INFO  | Flavor SCS-4V-8 created 2026-03-11 01:14:29.491366 | orchestrator | 2026-03-11 01:14:26 | INFO  | Flavor SCS-4V-8-20 created 2026-03-11 01:14:29.491372 | orchestrator | 2026-03-11 01:14:27 | INFO  | Flavor SCS-4V-16 created 2026-03-11 01:14:29.491379 | orchestrator | 2026-03-11 01:14:27 | INFO  | Flavor SCS-4V-16-50 created 2026-03-11 01:14:29.491386 | orchestrator | 2026-03-11 01:14:27 | INFO  | Flavor SCS-4V-32 created 2026-03-11 01:14:29.491393 | orchestrator | 2026-03-11 01:14:27 | INFO  | Flavor SCS-4V-32-100 created 
2026-03-11 01:14:29.491400 | orchestrator | 2026-03-11 01:14:27 | INFO  | Flavor SCS-8V-16 created 2026-03-11 01:14:29.491407 | orchestrator | 2026-03-11 01:14:27 | INFO  | Flavor SCS-8V-16-50 created 2026-03-11 01:14:29.491415 | orchestrator | 2026-03-11 01:14:27 | INFO  | Flavor SCS-8V-32 created 2026-03-11 01:14:29.491422 | orchestrator | 2026-03-11 01:14:28 | INFO  | Flavor SCS-8V-32-100 created 2026-03-11 01:14:29.491430 | orchestrator | 2026-03-11 01:14:28 | INFO  | Flavor SCS-16V-32 created 2026-03-11 01:14:29.491437 | orchestrator | 2026-03-11 01:14:28 | INFO  | Flavor SCS-16V-32-100 created 2026-03-11 01:14:29.491444 | orchestrator | 2026-03-11 01:14:28 | INFO  | Flavor SCS-2V-4-20s created 2026-03-11 01:14:29.491451 | orchestrator | 2026-03-11 01:14:28 | INFO  | Flavor SCS-4V-8-50s created 2026-03-11 01:14:29.491458 | orchestrator | 2026-03-11 01:14:29 | INFO  | Flavor SCS-4V-16-100s created 2026-03-11 01:14:29.491466 | orchestrator | 2026-03-11 01:14:29 | INFO  | Flavor SCS-8V-32-100s created 2026-03-11 01:14:31.804504 | orchestrator | 2026-03-11 01:14:31 | INFO  | Trying to run play bootstrap-basic in environment openstack 2026-03-11 01:14:41.827261 | orchestrator | 2026-03-11 01:14:41 | INFO  | Prepare task for execution of bootstrap-basic. 2026-03-11 01:14:41.898428 | orchestrator | 2026-03-11 01:14:41 | INFO  | Task a3421b2d-e3c4-4fc8-907c-3d99ff6da00b (bootstrap-basic) was prepared for execution. 2026-03-11 01:14:41.898500 | orchestrator | 2026-03-11 01:14:41 | INFO  | It takes a moment until task a3421b2d-e3c4-4fc8-907c-3d99ff6da00b (bootstrap-basic) has been started and output is visible here. 
2026-03-11 01:15:28.465711 | orchestrator | 2026-03-11 01:15:28.465802 | orchestrator | PLAY [Bootstrap basic OpenStack services] ************************************** 2026-03-11 01:15:28.465810 | orchestrator | 2026-03-11 01:15:28.465815 | orchestrator | TASK [Gathering Facts] ********************************************************* 2026-03-11 01:15:28.465819 | orchestrator | Wednesday 11 March 2026 01:14:46 +0000 (0:00:00.059) 0:00:00.059 ******* 2026-03-11 01:15:28.465824 | orchestrator | ok: [localhost] 2026-03-11 01:15:28.465828 | orchestrator | 2026-03-11 01:15:28.465833 | orchestrator | TASK [Get volume type LUKS] **************************************************** 2026-03-11 01:15:28.465837 | orchestrator | Wednesday 11 March 2026 01:14:47 +0000 (0:00:01.722) 0:00:01.781 ******* 2026-03-11 01:15:28.465842 | orchestrator | ok: [localhost] 2026-03-11 01:15:28.465846 | orchestrator | 2026-03-11 01:15:28.465850 | orchestrator | TASK [Create volume type LUKS] ************************************************* 2026-03-11 01:15:28.465854 | orchestrator | Wednesday 11 March 2026 01:14:56 +0000 (0:00:09.184) 0:00:10.966 ******* 2026-03-11 01:15:28.465858 | orchestrator | changed: [localhost] 2026-03-11 01:15:28.465862 | orchestrator | 2026-03-11 01:15:28.465866 | orchestrator | TASK [Create public network] *************************************************** 2026-03-11 01:15:28.465870 | orchestrator | Wednesday 11 March 2026 01:15:05 +0000 (0:00:08.159) 0:00:19.126 ******* 2026-03-11 01:15:28.465874 | orchestrator | changed: [localhost] 2026-03-11 01:15:28.465878 | orchestrator | 2026-03-11 01:15:28.465885 | orchestrator | TASK [Set public network to default] ******************************************* 2026-03-11 01:15:28.465889 | orchestrator | Wednesday 11 March 2026 01:15:10 +0000 (0:00:05.326) 0:00:24.453 ******* 2026-03-11 01:15:28.465893 | orchestrator | changed: [localhost] 2026-03-11 01:15:28.465897 | orchestrator | 2026-03-11 01:15:28.465901 | orchestrator 
| TASK [Create public subnet] **************************************************** 2026-03-11 01:15:28.465905 | orchestrator | Wednesday 11 March 2026 01:15:16 +0000 (0:00:06.374) 0:00:30.827 ******* 2026-03-11 01:15:28.465908 | orchestrator | changed: [localhost] 2026-03-11 01:15:28.465912 | orchestrator | 2026-03-11 01:15:28.465916 | orchestrator | TASK [Create default IPv4 subnet pool] ***************************************** 2026-03-11 01:15:28.465919 | orchestrator | Wednesday 11 March 2026 01:15:20 +0000 (0:00:04.047) 0:00:34.875 ******* 2026-03-11 01:15:28.465923 | orchestrator | changed: [localhost] 2026-03-11 01:15:28.465927 | orchestrator | 2026-03-11 01:15:28.465931 | orchestrator | TASK [Create manager role] ***************************************************** 2026-03-11 01:15:28.465940 | orchestrator | Wednesday 11 March 2026 01:15:24 +0000 (0:00:03.813) 0:00:38.688 ******* 2026-03-11 01:15:28.465944 | orchestrator | ok: [localhost] 2026-03-11 01:15:28.465948 | orchestrator | 2026-03-11 01:15:28.465952 | orchestrator | PLAY RECAP ********************************************************************* 2026-03-11 01:15:28.465956 | orchestrator | localhost : ok=8  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2026-03-11 01:15:28.465961 | orchestrator | 2026-03-11 01:15:28.465964 | orchestrator | 2026-03-11 01:15:28.465969 | orchestrator | TASKS RECAP ******************************************************************** 2026-03-11 01:15:28.465975 | orchestrator | Wednesday 11 March 2026 01:15:28 +0000 (0:00:03.530) 0:00:42.219 ******* 2026-03-11 01:15:28.465981 | orchestrator | =============================================================================== 2026-03-11 01:15:28.465987 | orchestrator | Get volume type LUKS ---------------------------------------------------- 9.18s 2026-03-11 01:15:28.466150 | orchestrator | Create volume type LUKS ------------------------------------------------- 8.16s 2026-03-11 01:15:28.466162 | 
orchestrator | Set public network to default ------------------------------------------- 6.37s 2026-03-11 01:15:28.466168 | orchestrator | Create public network --------------------------------------------------- 5.33s 2026-03-11 01:15:28.466174 | orchestrator | Create public subnet ---------------------------------------------------- 4.05s 2026-03-11 01:15:28.466180 | orchestrator | Create default IPv4 subnet pool ----------------------------------------- 3.81s 2026-03-11 01:15:28.466187 | orchestrator | Create manager role ----------------------------------------------------- 3.53s 2026-03-11 01:15:28.466194 | orchestrator | Gathering Facts --------------------------------------------------------- 1.72s 2026-03-11 01:15:30.874403 | orchestrator | 2026-03-11 01:15:30 | INFO  | It takes a moment until task 8a8aba9f-e081-4cc2-94fd-308de653da2a (image-manager) has been started and output is visible here. 2026-03-11 01:16:16.092427 | orchestrator | 2026-03-11 01:15:33 | INFO  | Processing image 'Cirros 0.6.2' 2026-03-11 01:16:16.092513 | orchestrator | 2026-03-11 01:15:33 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img: 302 2026-03-11 01:16:16.092526 | orchestrator | 2026-03-11 01:15:33 | INFO  | Importing image Cirros 0.6.2 2026-03-11 01:16:16.092533 | orchestrator | 2026-03-11 01:15:33 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-11 01:16:16.092541 | orchestrator | 2026-03-11 01:15:36 | INFO  | Waiting for image to leave queued state... 2026-03-11 01:16:16.092547 | orchestrator | 2026-03-11 01:15:39 | INFO  | Waiting for import to complete... 
2026-03-11 01:16:16.092551 | orchestrator | 2026-03-11 01:15:49 | INFO  | Import of 'Cirros 0.6.2' successfully completed, reloading images 2026-03-11 01:16:16.092556 | orchestrator | 2026-03-11 01:15:49 | INFO  | Checking parameters of 'Cirros 0.6.2' 2026-03-11 01:16:16.092560 | orchestrator | 2026-03-11 01:15:49 | INFO  | Setting internal_version = 0.6.2 2026-03-11 01:16:16.092564 | orchestrator | 2026-03-11 01:15:49 | INFO  | Setting image_original_user = cirros 2026-03-11 01:16:16.092568 | orchestrator | 2026-03-11 01:15:49 | INFO  | Adding tag os:cirros 2026-03-11 01:16:16.092572 | orchestrator | 2026-03-11 01:15:49 | INFO  | Setting property architecture: x86_64 2026-03-11 01:16:16.092576 | orchestrator | 2026-03-11 01:15:50 | INFO  | Setting property hw_disk_bus: scsi 2026-03-11 01:16:16.092580 | orchestrator | 2026-03-11 01:15:50 | INFO  | Setting property hw_rng_model: virtio 2026-03-11 01:16:16.092584 | orchestrator | 2026-03-11 01:15:50 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-11 01:16:16.092588 | orchestrator | 2026-03-11 01:15:50 | INFO  | Setting property hw_watchdog_action: reset 2026-03-11 01:16:16.092592 | orchestrator | 2026-03-11 01:15:51 | INFO  | Setting property hypervisor_type: qemu 2026-03-11 01:16:16.092601 | orchestrator | 2026-03-11 01:15:51 | INFO  | Setting property os_distro: cirros 2026-03-11 01:16:16.092605 | orchestrator | 2026-03-11 01:15:51 | INFO  | Setting property os_purpose: minimal 2026-03-11 01:16:16.092609 | orchestrator | 2026-03-11 01:15:51 | INFO  | Setting property replace_frequency: never 2026-03-11 01:16:16.092613 | orchestrator | 2026-03-11 01:15:52 | INFO  | Setting property uuid_validity: none 2026-03-11 01:16:16.092617 | orchestrator | 2026-03-11 01:15:52 | INFO  | Setting property provided_until: none 2026-03-11 01:16:16.092620 | orchestrator | 2026-03-11 01:15:52 | INFO  | Setting property image_description: Cirros 2026-03-11 01:16:16.092624 | orchestrator | 2026-03-11 01:15:52 | INFO  | 
Setting property image_name: Cirros 2026-03-11 01:16:16.092642 | orchestrator | 2026-03-11 01:15:53 | INFO  | Setting property internal_version: 0.6.2 2026-03-11 01:16:16.092646 | orchestrator | 2026-03-11 01:15:53 | INFO  | Setting property image_original_user: cirros 2026-03-11 01:16:16.092650 | orchestrator | 2026-03-11 01:15:53 | INFO  | Setting property os_version: 0.6.2 2026-03-11 01:16:16.092655 | orchestrator | 2026-03-11 01:15:53 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.2/cirros-0.6.2-x86_64-disk.img 2026-03-11 01:16:16.092660 | orchestrator | 2026-03-11 01:15:54 | INFO  | Setting property image_build_date: 2023-05-30 2026-03-11 01:16:16.092664 | orchestrator | 2026-03-11 01:15:54 | INFO  | Checking status of 'Cirros 0.6.2' 2026-03-11 01:16:16.092667 | orchestrator | 2026-03-11 01:15:54 | INFO  | Checking visibility of 'Cirros 0.6.2' 2026-03-11 01:16:16.092675 | orchestrator | 2026-03-11 01:15:54 | INFO  | Setting visibility of 'Cirros 0.6.2' to 'public' 2026-03-11 01:16:16.092678 | orchestrator | 2026-03-11 01:15:54 | INFO  | Processing image 'Cirros 0.6.3' 2026-03-11 01:16:16.092682 | orchestrator | 2026-03-11 01:15:55 | INFO  | Tested URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img: 302 2026-03-11 01:16:16.092686 | orchestrator | 2026-03-11 01:15:55 | INFO  | Importing image Cirros 0.6.3 2026-03-11 01:16:16.092690 | orchestrator | 2026-03-11 01:15:55 | INFO  | Importing from URL https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-11 01:16:16.092694 | orchestrator | 2026-03-11 01:15:56 | INFO  | Waiting for image to leave queued state... 2026-03-11 01:16:16.092697 | orchestrator | 2026-03-11 01:15:58 | INFO  | Waiting for import to complete... 
2026-03-11 01:16:16.092711 | orchestrator | 2026-03-11 01:16:09 | INFO  | Import of 'Cirros 0.6.3' successfully completed, reloading images 2026-03-11 01:16:16.092715 | orchestrator | 2026-03-11 01:16:09 | INFO  | Checking parameters of 'Cirros 0.6.3' 2026-03-11 01:16:16.092719 | orchestrator | 2026-03-11 01:16:09 | INFO  | Setting internal_version = 0.6.3 2026-03-11 01:16:16.092722 | orchestrator | 2026-03-11 01:16:09 | INFO  | Setting image_original_user = cirros 2026-03-11 01:16:16.092726 | orchestrator | 2026-03-11 01:16:09 | INFO  | Adding tag os:cirros 2026-03-11 01:16:16.092730 | orchestrator | 2026-03-11 01:16:10 | INFO  | Setting property architecture: x86_64 2026-03-11 01:16:16.092734 | orchestrator | 2026-03-11 01:16:10 | INFO  | Setting property hw_disk_bus: scsi 2026-03-11 01:16:16.092737 | orchestrator | 2026-03-11 01:16:10 | INFO  | Setting property hw_rng_model: virtio 2026-03-11 01:16:16.092741 | orchestrator | 2026-03-11 01:16:10 | INFO  | Setting property hw_scsi_model: virtio-scsi 2026-03-11 01:16:16.092745 | orchestrator | 2026-03-11 01:16:11 | INFO  | Setting property hw_watchdog_action: reset 2026-03-11 01:16:16.092749 | orchestrator | 2026-03-11 01:16:11 | INFO  | Setting property hypervisor_type: qemu 2026-03-11 01:16:16.092752 | orchestrator | 2026-03-11 01:16:11 | INFO  | Setting property os_distro: cirros 2026-03-11 01:16:16.092756 | orchestrator | 2026-03-11 01:16:11 | INFO  | Setting property os_purpose: minimal 2026-03-11 01:16:16.092760 | orchestrator | 2026-03-11 01:16:12 | INFO  | Setting property replace_frequency: never 2026-03-11 01:16:16.092764 | orchestrator | 2026-03-11 01:16:12 | INFO  | Setting property uuid_validity: none 2026-03-11 01:16:16.092767 | orchestrator | 2026-03-11 01:16:12 | INFO  | Setting property provided_until: none 2026-03-11 01:16:16.092771 | orchestrator | 2026-03-11 01:16:13 | INFO  | Setting property image_description: Cirros 2026-03-11 01:16:16.092779 | orchestrator | 2026-03-11 01:16:13 | INFO  | 
Setting property image_name: Cirros 2026-03-11 01:16:16.092782 | orchestrator | 2026-03-11 01:16:13 | INFO  | Setting property internal_version: 0.6.3 2026-03-11 01:16:16.092786 | orchestrator | 2026-03-11 01:16:13 | INFO  | Setting property image_original_user: cirros 2026-03-11 01:16:16.092790 | orchestrator | 2026-03-11 01:16:14 | INFO  | Setting property os_version: 0.6.3 2026-03-11 01:16:16.092794 | orchestrator | 2026-03-11 01:16:14 | INFO  | Setting property image_source: https://github.com/cirros-dev/cirros/releases/download/0.6.3/cirros-0.6.3-x86_64-disk.img 2026-03-11 01:16:16.092797 | orchestrator | 2026-03-11 01:16:14 | INFO  | Setting property image_build_date: 2024-09-26 2026-03-11 01:16:16.092801 | orchestrator | 2026-03-11 01:16:15 | INFO  | Checking status of 'Cirros 0.6.3' 2026-03-11 01:16:16.092805 | orchestrator | 2026-03-11 01:16:15 | INFO  | Checking visibility of 'Cirros 0.6.3' 2026-03-11 01:16:16.092809 | orchestrator | 2026-03-11 01:16:15 | INFO  | Setting visibility of 'Cirros 0.6.3' to 'public' 2026-03-11 01:16:16.371728 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh 2026-03-11 01:16:18.760669 | orchestrator | 2026-03-11 01:16:18 | INFO  | date: 2026-03-10 2026-03-11 01:16:18.760731 | orchestrator | 2026-03-11 01:16:18 | INFO  | image: octavia-amphora-haproxy-2024.2.20260310.qcow2 2026-03-11 01:16:18.761609 | orchestrator | 2026-03-11 01:16:18 | INFO  | url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260310.qcow2 2026-03-11 01:16:18.761736 | orchestrator | 2026-03-11 01:16:18 | INFO  | checksum_url: https://nbg1.your-objectstorage.com/osism/openstack-octavia-amphora-image/octavia-amphora-haproxy-2024.2.20260310.qcow2.CHECKSUM 2026-03-11 01:16:18.862547 | orchestrator | 2026-03-11 01:16:18 | INFO  | checksum: localhost | ok: "/var/lib/zuul/builds/60b46b9ceeea47e7bd8f6c4f3c34d8fb/work/logs" 2026-03-11 01:16:52.303199 | 
orchestrator -> localhost | changed: "/var/lib/zuul/builds/60b46b9ceeea47e7bd8f6c4f3c34d8fb/work/artifacts" 2026-03-11 01:16:52.593216 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/60b46b9ceeea47e7bd8f6c4f3c34d8fb/work/docs" 2026-03-11 01:16:52.608704 | 2026-03-11 01:16:52.608919 | LOOP [fetch-output : Collect logs, artifacts and docs] 2026-03-11 01:16:53.599222 | orchestrator | changed: .d..t...... ./ 2026-03-11 01:16:53.599507 | orchestrator | changed: All items complete 2026-03-11 01:16:53.599552 | 2026-03-11 01:16:54.309069 | orchestrator | changed: .d..t...... ./ 2026-03-11 01:16:55.026201 | orchestrator | changed: .d..t...... ./ 2026-03-11 01:16:55.052545 | 2026-03-11 01:16:55.052702 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir] 2026-03-11 01:16:55.081823 | orchestrator | skipping: Conditional result was False 2026-03-11 01:16:55.084818 | orchestrator | skipping: Conditional result was False 2026-03-11 01:16:55.098917 | 2026-03-11 01:16:55.099058 | PLAY RECAP 2026-03-11 01:16:55.099128 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0 2026-03-11 01:16:55.099162 | 2026-03-11 01:16:55.248732 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main] 2026-03-11 01:16:55.250286 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-11 01:16:56.161348 | 2026-03-11 01:16:56.161517 | PLAY [Base post] 2026-03-11 01:16:56.177054 | 2026-03-11 01:16:56.177211 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes] 2026-03-11 01:16:57.691583 | orchestrator | changed 2026-03-11 01:16:57.700393 | 2026-03-11 01:16:57.700523 | PLAY RECAP 2026-03-11 01:16:57.700591 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0 2026-03-11 01:16:57.700657 | 2026-03-11 01:16:57.860575 | POST-RUN END RESULT_NORMAL: [trusted : 
github.com/osism/zuul-config/playbooks/base/post.yaml@main] 2026-03-11 01:16:57.865199 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main] 2026-03-11 01:16:58.705597 | 2026-03-11 01:16:58.705768 | PLAY [Base post-logs] 2026-03-11 01:16:58.717549 | 2026-03-11 01:16:58.717707 | TASK [generate-zuul-manifest : Generate Zuul manifest] 2026-03-11 01:16:59.213470 | localhost | changed 2026-03-11 01:16:59.226405 | 2026-03-11 01:16:59.226580 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul] 2026-03-11 01:16:59.257933 | localhost | ok 2026-03-11 01:16:59.261332 | 2026-03-11 01:16:59.261436 | TASK [Set zuul-log-path fact] 2026-03-11 01:16:59.286404 | localhost | ok 2026-03-11 01:16:59.295827 | 2026-03-11 01:16:59.295946 | TASK [set-zuul-log-path-fact : Set log path for a build] 2026-03-11 01:16:59.332020 | localhost | ok 2026-03-11 01:16:59.336527 | 2026-03-11 01:16:59.336653 | TASK [upload-logs : Create log directories] 2026-03-11 01:16:59.890458 | localhost | changed 2026-03-11 01:16:59.895907 | 2026-03-11 01:16:59.896149 | TASK [upload-logs : Ensure logs are readable before uploading] 2026-03-11 01:17:00.421042 | localhost -> localhost | ok: Runtime: 0:00:00.007544 2026-03-11 01:17:00.430603 | 2026-03-11 01:17:00.430831 | TASK [upload-logs : Upload logs to log server] 2026-03-11 01:17:01.058788 | localhost | Output suppressed because no_log was given 2026-03-11 01:17:01.061904 | 2026-03-11 01:17:01.062074 | LOOP [upload-logs : Compress console log and json output] 2026-03-11 01:17:01.124631 | localhost | skipping: Conditional result was False 2026-03-11 01:17:01.130886 | localhost | skipping: Conditional result was False 2026-03-11 01:17:01.143824 | 2026-03-11 01:17:01.143976 | LOOP [upload-logs : Upload compressed console log and json output] 2026-03-11 01:17:01.197988 | localhost | skipping: Conditional result was False 2026-03-11 01:17:01.198348 | 2026-03-11 01:17:01.203520 | localhost | skipping: Conditional 
result was False 2026-03-11 01:17:01.211687 | 2026-03-11 01:17:01.211813 | LOOP [upload-logs : Upload console log and json output]